Cut the Red Tape
Mario Loyola - Florida International University Law School

The United States has the world’s most costly, time-consuming, and unpredictable system for authorizing big infrastructure projects. It puts America at a grave competitive disadvantage compared with other industrial powers, including China. The social costs are enormous and are passed on to consumers, who must ultimately pay a premium for elevated risk and constricted supply. It deprives Americans of affordable energy, adequate roadways, and even safe drinking water.

And if you think the climate crisis is “code red for humanity,” as President Biden has said, the hard truth is this: Until Congress reforms the entire permitting system, the goal of a clean energy transition is almost certainly unachievable.

Consider the staggering amount of infrastructure that would be required to meet the administration’s goal of a zero-carbon electricity grid by 2035: scores of new nuclear plants, hundreds or thousands of new utility-scale solar plants, tens of thousands of windmills, hundreds of thousands of miles of transmission lines. Under current law and given agency workforce constraints, securing permits for all those projects in time to finish, or in some cases even to start, construction before 2035 is simply a fantasy.

Congress has appropriated nearly $2 trillion for “green” infrastructure. But money is not the limiting factor in America’s ability to deploy major infrastructure projects. The crucial limiting factor today—and the main obstacle to a clean energy transition going forward—is the massive amount of federal agency resources consumed by the struggle to comply with the National Environmental Policy Act in a context of inordinate litigation risk.

Section 102(2)(C) of NEPA requires agencies to prepare an environmental impact statement for any “major federal actions significantly affecting the quality of the human environment.” Any federal permit required for a major infrastructure project usually triggers the requirement of an EIS.

According to a recent survey by the White House Council on Environmental Quality, which was created by NEPA to oversee its implementation, the preparation of a typical EIS takes on average 4.5 years, consumes tens of thousands of agency person-hours, and costs millions of dollars in taxpayer resources. That's on top of the tens of millions an EIS can cost project proponents. So even with the most lavishly funded bureaucracy on Earth, the entire federal government produces at most 75 or 80 final EISs every year. That pace is woefully short of what is needed to reach the 2035 zero-carbon goal.
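For readers who want to check the throughput arithmetic, here is a minimal back-of-envelope sketch in Python using the figures above; the 2023 start year is an assumption for illustration only.

```python
# Back-of-envelope sketch of federal EIS throughput, using the figures cited above.
# The start year is an assumption for illustration, not an official estimate.

EIS_PER_YEAR = 80     # upper end of final EISs completed government-wide each year
AVG_EIS_YEARS = 4.5   # CEQ's average preparation time for a single EIS
START_YEAR = 2023     # assumed first full year of a permitting push
DEADLINE = 2035

# An EIS that finishes after the deadline is useless, so the last useful
# start date falls well before 2035.
last_useful_start = DEADLINE - AVG_EIS_YEARS
max_final_eis = EIS_PER_YEAR * (DEADLINE - START_YEAR)

print(f"Last year an average-length EIS can usefully begin: ~{last_useful_start:.0f}")
print(f"Maximum final EISs, all sectors combined, by {DEADLINE}: ~{max_final_eis}")
# Those reviews must cover every kind of federal action (highways, pipelines,
# forest plans, and so on), not just clean energy projects.
```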

To give some sense of what this looks like on the ground, the Bureau of Land Management’s Nevada State Office, where dozens of solar projects would have to be evaluated, is totally overwhelmed by the effort to complete one EIS every year or two. The Nevada office has issued a “Prioritization Guidance” to help it select the small handful of applications its staff can handle over the next couple of years from among the flood of solar permit applications.

By the time Senators Joe Manchin (D-WV) and Chuck Schumer (D-NY) agreed to streamline permitting as a side-deal to the Inflation Reduction Act, the 117th Congress had not done much of anything to lay the political groundwork for sweeping reform. Not surprisingly, what emerged from the deal was a potpourri of disconnected measures responding in most cases to the demands of narrow special interest groups and falling far short of what would be required for a clean energy transition by 2035. Even with the most dire stakes imaginable, the most that policymakers have been able to accomplish is tinkering at the margins.

Any serious effort to undertake a clean energy transition must start with a close look at the staggering amount of clean energy infrastructure that would be required. The next step is to wrap one’s head around the frightful tangle of red tape that turns the federal permitting process for most such projects into a years-long odyssey. That exercise sheds light on some of what Congress will have to do if it ever gets serious about the obstacles to a clean energy transition.

There are many estimates of the power capacity additions that would be required for a net-zero energy sector, most of them in the same general ballpark. For example, the Electric Power Research Institute estimates that to achieve a zero-carbon electrical system by 2035, the grid would need to add 900 gigawatts of new wind and solar, 80 GW of new nuclear capacity (doubling current nuclear capacity nationwide), and 200 GW of hydrogen-fueled turbines.

Many estimates don't mention nuclear at all. That's because powerful environmental advocacy groups remain adamantly opposed to it, which may also explain why Democrats have put virtually no effort into advancing nuclear power. That is a major obstacle to the clean energy transition in itself, because most scenarios aim to replace the "dispatchable" baseload generation of coal and natural gas plants with intermittent wind and solar, creating significant challenges for reliability and capacity. Utility-scale batteries, smart grids, and similar technologies have come a long way, but the challenge of intermittency is why prominent international authorities call for a doubling or even tripling of nuclear power around the world to have any chance of meeting the Paris Agreement's goal of limiting warming to no more than 1.5 degrees Celsius.

The American nuclear fleet is dwindling and there are no plans to build any new nuclear plants in the United States. But even if there were, they couldn’t be part of the clean electricity mix in EPRI’s estimate. The permitting timeline for nuclear is the longest of any infrastructure sector. A nuclear reactor due to open in Georgia in the next couple of years started its odyssey through the federal permitting process in 2006, after many years of project design and development. Nuclear regulatory reform is urgently needed, but Congress has done virtually nothing about it.

One notably optimistic review of 11 studies of non-nuclear pathways to clean electricity by 2030 and 2035, by Energy Innovation LLC, shows a consistent estimate across studies of about one terawatt of solar and wind, plus 100 GW of battery storage. That review notes that this would require an average annual deployment of new renewable energy capacity at double or triple the record rate of 31 GW of wind and solar additions in 2020, “a challenging but feasible pace of development.”
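As a rough check on that "double or triple" figure, here is a minimal sketch in Python of the average annual build rate implied by roughly one terawatt of new wind and solar, using the review's round numbers; the 2020 baseline year is taken from the record cited above.

```python
# Rough arithmetic behind the "double or triple the record pace" claim.
# Figures are the round numbers quoted from the Energy Innovation review.

TARGET_GW = 1_000        # ~1 TW of new wind and solar
RECORD_GW_PER_YEAR = 31  # record U.S. wind + solar additions in 2020

for target_year in (2030, 2035):
    years = target_year - 2020
    required_rate = TARGET_GW / years
    multiple = required_rate / RECORD_GW_PER_YEAR
    print(f"Clean electricity by {target_year}: ~{required_rate:.0f} GW/yr, "
          f"about {multiple:.1f}x the 2020 record")
```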

The authors don’t elaborate on why they think that would be “feasible,” perhaps because they have been spared the trials and tribulations of going through the NEPA process. But it isn’t feasible—not remotely. Since the early Obama administration, federal agencies have strained to streamline their permitting processes and increase throughput. They are virtually at the limit of the streamlining that current law will allow without leaving their permits and NEPA reviews vulnerable to court challenge.

As many experts have noted, the fear of litigation risk is the main source of cost, delay, and uncertainty in the NEPA process. It is also the crucial limiting factor in the clean energy transition. Litigation risk has the entire federal bureaucracy backed up against a wall, struggling to produce permits and EISs that are perfect in every last detail, whether relevant to the agency decisionmaker or not. (The statutory purpose of NEPA, incidentally, is to inform the agency decisionmaker.) This means that without changes in the law, the only way to double or triple the pace of permitting at federal agencies is by doubling or tripling the size of the federal workforce involved in project reviews.

Reliable estimates are hard to come by, but a reasonable guess is that on the order of 10,000 federal agency staff spend most of their time processing permit applications for infrastructure projects. To get a sense of how much the federal permitting bureaucracy would have to grow, consider the most significant increase in that workforce produced by the entire 117th Congress: the Inflation Reduction Act's nearly $1 billion to expand permitting staff over five years, including $350 million for an Environmental Review Improvement Fund at the Federal Permitting Improvement Steering Council, which was created under the 2015 Fixing America's Surface Transportation Act to coordinate the permitting of major infrastructure projects. This massive boost in funding would add perhaps five or six hundred full-time equivalents to that workforce. That's an increase of maybe five percent, assuming agencies can find and train qualified personnel in this highly technical field quickly enough. The added staff would significantly help with the current backlog of applications, but the total would fall woefully short of the needed doubling of personnel.
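A minimal sketch of that staffing arithmetic, in Python; the 10,000-person baseline is the rough guess above, so the results are illustrative only.

```python
# Staffing arithmetic behind the "maybe five percent" figure.
# The baseline workforce is a rough guess; treat the output as illustrative.

current_staff = 10_000   # rough estimate of federal permitting staff
added_ftes = 550         # midpoint of the "five or six hundred" IRA-funded FTEs

increase = added_ftes / current_staff
print(f"Workforce increase from IRA funding: ~{increase:.1%}")  # roughly five percent

# A doubling, the minimum increase argued for above absent changes in the law,
# would require on the order of another 10,000 FTEs.
needed_for_doubling = current_staff
print(f"IRA-funded additions as a share of a doubling: ~{added_ftes / needed_for_doubling:.0%}")
```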

As unrealistic as it is to think that we could double the size of the federal permitting workforce quickly enough to make a difference, there is yet another problem with Energy Innovation’s hopeful estimates. Its calculation of the required increase in average permitting pace presupposes a time horizon of 10 or 15 years, depending on whether we’re looking at 2030 or 2035. But that doesn’t take any account of the actual timeline for deploying infrastructure projects, which entails several years of preapplication and has to be followed by several years of actual construction.

Between the bookends of preapplication and construction, permitting time for solar projects, according to the Solar Energy Industries Association, can be between three and five years. That means achieving net-zero by 2030 is already impossible: Projects that begin preapplication in the coming year generally won't come online until 2030 at the earliest. And even for a clean electricity transition to occur by 2035, all the projects necessary for a roughly one terawatt addition of renewable electricity would have to finish preapplication and file their permits by 2027 at the latest. Then all those permits would have to be processed and the environmental reviews completed within three or four years. Hence the effective permitting window for a clean energy transition by 2035 is 2025-2032, a period of just seven years, not 15 as in Energy Innovation's estimates.

So during that main wave of permit processing and environmental review, the processing rate would have to be at least four times the rate of the record year of 2020, and perhaps significantly faster than that. In other words, Congress would have to at least quadruple or quintuple the size of the federal permitting workforce.
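Putting the timeline pieces together, here is a short Python sketch of that calculation; the window endpoints and capacity target are the round numbers from the reasoning above, so treat the output as illustrative.

```python
# Timeline arithmetic: why the effective permitting window shrinks to ~7 years
# and why the required processing rate is roughly 4-5x the 2020 record.

TARGET_GW = 1_000         # ~1 TW of new renewables for a 2035 transition
RECORD_GW_PER_YEAR = 31   # record wind + solar additions in 2020

window_start = 2025       # earliest the main wave of applications is filed
window_end = 2032         # latest permits can issue and still allow construction
window_years = window_end - window_start

required_rate = TARGET_GW / window_years
multiple = required_rate / RECORD_GW_PER_YEAR

print(f"Effective permitting window: {window_start}-{window_end} ({window_years} years)")
print(f"Average capacity that must clear permitting: ~{required_rate:.0f} GW/yr")
print(f"That is roughly {multiple:.1f}x the record 2020 deployment pace")
```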

Now consider the hurdles facing the actual projects. Taking solar as an example, most studies suggest that the United States would have to add on the order of 500 GW of utility solar capacity. Suppose that each solar project in that total is very large, with a nameplate capacity of 500 MW. Adding 500 GW of solar capacity would require 1,000 such projects. Judging by the largest currently in operation, each such solar project would cover perhaps 5,000 acres, for a total of 5,000,000 acres. That’s the entire state of New Jersey—covered in solar panels.
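The land-area arithmetic in that example works out as follows; this sketch uses the article's round numbers, and the New Jersey figure is the state's approximate total area, included only for comparison.

```python
# Land-area arithmetic for the utility-scale solar example.
# All inputs are round numbers from the text; the result is an approximation.

TOTAL_SOLAR_GW = 500          # utility-scale solar additions in most studies
PROJECT_MW = 500              # assume each project is very large: 500 MW nameplate
ACRES_PER_PROJECT = 5_000     # rough footprint of the largest operating projects
NEW_JERSEY_ACRES = 5_600_000  # approximate total area of New Jersey

projects = TOTAL_SOLAR_GW * 1_000 // PROJECT_MW   # convert GW to MW, then divide
total_acres = projects * ACRES_PER_PROJECT

print(f"Projects required: {projects:,}")
print(f"Total footprint: {total_acres:,} acres")
print(f"Share of New Jersey's area: ~{total_acres / NEW_JERSEY_ACRES:.0%}")
```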

Many of those solar projects won’t require federal permits at all, particularly if they aren’t built on federal land. But where the sun shines for 365 days a year is in the deserts and high plains of the western states—where the federal government owns virtually all the land. And every solar project built on federal land requires its own permit and its own EIS.

The NEPA process is tailor-made for NIMBY-ism. "Scoping" allows local opponents to lodge issues that agencies must explore at length, and which can later be litigated. Each solar project application entails political trauma for regional agency staff and often for the agency headquarters as well. Worse still, covering an area the size of New Jersey with solar panels will have myriad environmental consequences, each of which must be studied in detail and avoided, minimized, or mitigated if possible—and many of which might impel the reasonable conservationist to ask, "Is this really worth it?" Anyone who has seen the leach fields for disposal of lithium batteries, where birds die within seconds of alighting, should wonder.

Then those solar and wind projects need to be connected to the grid by a network of new transmission lines. Linear projects such as transmission towers and pipelines are among the most resource-intensive permits for agencies to process. That's because linear projects trigger permit requirements—and fierce local opposition—all along their route. All of this slows the already slow permitting process to a crawl. To give one example, the TransWest Express Transmission Line, running for 700 miles and with a capacity of 3 GW, was designed to transmit wind power from Wyoming to Nevada and California. It took 15 years to get the permits required for construction to begin.

The clean energy transition will entail transmission lines on a scale that most Americans can't imagine. Wind and solar must be built where the wind blows and the sun shines, not where consumers are. Hence each megawatt of renewable capacity will require orders of magnitude more transmission line miles than each megawatt requires currently, and average line length will grow dramatically as developers look farther and farther afield from their target markets for suitable sites. According to a National Academies report, the 2050 net-zero goal would require construction of one million miles of new transmission lines.

Given the much longer lead times on transmission lines compared to renewable energy power plants, it’s easy to see another looming problem: solar plants sitting idle in the middle of nowhere for years on end, waiting for transmission lines to arrive. Indeed this is already happening, as in the case of the Cardinal-Hickory Creek transmission project in Iowa and Wisconsin.

A series of interrelated structural problems combine to create inordinate delays, costs, and uncertainties for infrastructure projects. Of those impacts the worst by far is uncertainty, the major source of risk to capital formation and hence a principal source of the significant social losses caused by the NEPA process.

Unfortunately, that uncertainty has many sources, the most important of which is litigation risk, which inflates the time and resources agencies devote to processing permit applications out of all proportion to the environmental costs and benefits at stake.

The uncertainty begins with the inordinate litigation risk that hangs like a cloud over every EIS from the start. The problem has been years in the making. It started in the 1970s with the invention of court-ordered "hard look" NEPA review, which, along with Chevron deference—another judicial creation, arriving a few years later, that requires courts to favor agency positions where statutes are unclear—turned the standards of review spelled out in Section 706 of the Administrative Procedure Act upside down. (Where Section 706 specifies that courts are to review questions of law de novo and set aside agency actions only if they are "arbitrary and capricious," courts now defer to agencies on questions of law and second-guess agency findings on technical matters that judges struggle to understand at all.)

A related problem is that there is no doctrine of substantial performance or materiality: An agency may get an EIS 99.9 percent perfect, but if it forgot to study the habitat needs of the butterfly that one person casually mentioned in a town hall meeting during scoping—boom, permit vacated. Agencies have to think of literally everything, because the omission of one paragraph in a 1,000-page document could be "arbitrary and capricious." The purpose of NEPA is to inform the decisionmaker, which creates an implied standard of materiality for every impact and alternative under consideration. Alas, federal courts have combined with the CEQ regulation of NEPA to require agencies to study impacts well upstream and downstream of the project, even when those impacts are entirely within the control of other governments, and in far greater detail than is remotely relevant to the permitting decision. And because of the loose wording of the NEPA regulations, agencies devote hundreds of pages in EISs to studying alternatives to the proposed project when what the statute requires is consideration of alternatives to the proposed action, which in the case of an infrastructure project is just the up-or-down permitting decision.

It’s no surprise that agencies only win about 70 percent of cases in court. Defenders of NEPA tout this as evidence that agencies prevail “most of the time” so litigation isn’t that big a deal, but in reality it’s an atrocious figure, considering the endless time and resources agencies devote to complying with every last detail that the law might require. District courts face a similar rate of reversal on appeal, but of course only a tiny fraction of judgments get appealed, whereas the litigation risk for a final EIS is virtually 100 percent. And district courts don’t spend 4.5 years, tens of thousands of hours, and millions of dollars trying to make absolutely certain that they get everything right, and thankfully so because if they did you’d have a complete breakdown in the administration of justice—an apt description of NEPA litigation.

Many judges appear to be operating on an unstated and perhaps unconscious premise that environmental advocacy groups represent the public interest but agencies do not. This manifests in a damaging relaxation of procedural protections that defendants normally enjoy. Courts have bent over backwards to confer standing on virtually anyone who wants to oppose a project. NEPA creates no right of action, so courts had to find one in the stopgap enforcement provision of the APA. That provision requires "legal harm" for standing, but courts look past that requirement for environmental advocacy groups by resorting to the "zone of interest" theory of "procedural standing," piling one ancillary stopgap on another. So if you go boating on a lake you have standing to sue FERC over a transmission line that will be partly visible from the lake, even though the transmission line is urgently needed to connect a small city to a renewable power source that is sitting idle after $100 million of investment.

Once in court, the red carpet treatment continues. When asking for a preliminary injunction, a plaintiff must normally post a bond to protect the defendant against losses resulting from the injunction should the plaintiff ultimately lose. Courts waive that for environmental litigants, because of the “public interest.” And when it comes time to balance the equities in granting the injunction, courts give short shrift to the public interest in effective agency action, or ignore it entirely. Indeed, in the 9th Circuit, stopping a project is considered to cause no harm to the agency because ipso facto stopping a project won’t harm the environment—as if environmental losses are the only losses we need to worry about when deciding to stop an infrastructure project of urgent national importance, where developers have invested tens or hundreds of millions of dollars.

Another major problem is the very existence of the CEQ regulation of NEPA, which dramatically increases the litigation target area of every project review. This is a fascinating issue, because CEQ has no rulemaking authority. The regulation is arguably nothing more than an executive order, like E.O. 12866, which establishes the Office of Management and Budget rulemaking process for federal agencies. Teleporting the "legal harm" and "procedural standing" doctrines into a document that creates no private rights or obligations, courts have transformed the CEQ regulation into a compendium of legally enforceable requirements. Hundreds of federal permits have been vacated by courts because of agencies' failures to comply with supposed NEPA requirements that are not in the statute and that were invented by CEQ out of thin air. But without foundation in delegated rulemaking authority, the regulation of NEPA is just a set of directives to agency heads. Presidential directives such as executive orders have never been considered enforceable de jure and draw the entirety of their compelling force from the president's removal power, which does not extend to independent agencies like FERC. In the key NEPA case of Department of Transportation v. Public Citizen, Justice Clarence Thomas wrote that "CEQ was established by NEPA with authority to issue regulations interpreting it," but the statute doesn't say that anywhere, and it's simply not true. Moreover, even if courts defer to the council's statutory interpretations, it's another thing entirely for CEQ to use purely presidential directive authority to instruct an agency to discuss "cumulative impacts" (a concept nowhere to be found in the statute) and then have courts treat that directive as if it were legally enforceable in a lawsuit brought by a private party. It's the exact equivalent of the president instructing federal staff to observe a business dress code and a private citizen suing because some agencies have casual Friday.

Another major problem with the permitting process is the hydra-headed nature of agency permitting authorities. The description is not totally apt, because the hydra at least had a single body, whereas the permitting processes of federal agencies are almost completely disconnected—despite manifold interdependencies. Efforts by multiple administrations to establish a coordinated process quickly run up against the reality of statutory structure, a problem that only Congress can fix. The CEQ regulation's provisions on a "lead agency" that prepares a single NEPA document in coordination with "cooperating agencies" don't relieve the project developer of having to create, essentially from scratch, an interagency process among a bunch of agencies that often couldn't care less what the developer has to say on any subject.

A related problem is that agencies take it on themselves to prepare environmental documents that the developer could prepare instead, much faster and just as well, subject to agency verification and approval, as is done in Australia, for example. Changing that was one of the most important features of the Trump administration's 2020 revisions to the CEQ NEPA regulations, which were partly pulled back by the Biden administration to placate environmental advocacy groups, despite the fact that renewable energy companies were the disproportionate beneficiaries of the Trump reform.

The problems I've described create a mountain of obstacles to any clean energy transition, and only Congress can remove them. Although polls show significant public concern about the effects of climate change, the issue is not a top priority for most Americans, who are primarily worried about inflation and other concerns. Perhaps that explains why Congress has failed thus far to enact comprehensive reforms of the sort that would be needed for a successful clean energy transition. TEF

COVER STORY I The federal project review process is a daunting obstacle to any clean energy transition. Until Congress reforms the entire permitting system, the goal of a renewable energy economy is almost certainly beyond reach.

A Valuable Tool—If It Is Used Carefully
Gregory Jaffe - Center for Science in the Public Interest

Agricultural biotechnologies are important tools that can help mitigate climate change, support adaptation to it, and improve nutrition security. However, the products of those technologies must be safe for humans and the environment and be utilized sustainably.

The first generation of engineered products—soybeans, corn, cotton, and canola resistant to herbicides or that produce their own pesticides—have been regulated by a system that can be described as “case by case.” Depending on the organism and the introduced trait, one, two, or three agencies (USDA, FDA, or EPA) review each individual product using existing laws to ensure it does not have an adverse impact on food safety and nutrition, agricultural interests, or the environment. Some federal oversight is mandatory (registration at EPA for pesticide-producing plants) while other procedures are voluntary (FDA’s oversight of biotech plants). Some procedures are transparent and allow for public input (USDA deregulation of engineered plants) while others are not (FDA’s approval of engineered animals as new animal drugs).

More importantly, the regulatory system focuses more on the requirements of the law being applied than on the potential risks and impacts of the product. Recent changes to USDA's rules now exempt large categories of products from oversight, without the necessary scientific evidence to justify those exemptions.

The federal government needs to institute science-based and proportionate oversight that ensures biotech plants and animals are safe and do not adversely affect the environment. FDA's oversight of biotech plants should be mandatory, and the agency needs to confirm that products are safe. USDA should not allow developers to self-determine whether their products meet one of the agency's exempt categories, and it should base any exemptions on the organism and introduced trait, not solely on the type of genetic change. FDA should establish proportionate regulatory procedures for animals with genomic changes so that it does not require the same degree of oversight for a gene-edited animal carrying an existing gene from the same species (a hornless cow) as it does for an animal with an added gene from an unrelated species (salmon engineered to grow faster).

Engineered and edited products need to be used sustainably and provide benefits, both of which can only be determined case by case. While insect-resistant crops have been associated with significant reductions in chemical insecticide sprays, overuse of Bt corn has led to resistant pest populations. Crops engineered to withstand herbicides (glyphosate-tolerant corn, cotton, and soybeans) have increased use of certain herbicides but also replaced others—for other examples, see the CSPI report, "In the Weeds." Depending on the crop and chemical, the net result of such substitutions can be increases in different herbicides but not necessarily increased toxicity.

Overall, glyphosate use has increased significantly, but the net result is lower acute toxicity for corn, cotton, and soybeans and increased chronic toxicity for corn and cotton (with a reduction for soybeans). As with Bt corn, overuse of glyphosate on herbicide-tolerant crops has led to resistant weeds, requiring farmers to go back to spraying chemicals that those crops were designed to eliminate.

EPA rightly requires farmers growing Bt crops to take steps to delay the development of resistant pests, but the agency should strengthen those requirements to address resistant insects that have developed. In addition, if a chemical will be sprayed on a herbicide-tolerant crop, EPA should impose conditions that delay development of resistant weeds.

With a regulatory system that is science-based, proportionate, transparent, and timely, genetically engineered and gene-edited products could more easily reach the market in the United States. Then advantageous traits such as drought tolerance, nitrogen fixation, and nutritional enhancement could impact the major food and agricultural challenges our country and world face.

A Bounty of Benefits
Stanley Abramson - ArentFox Schiff
Karen Carr - ArentFox Schiff

Nearly 70 years have passed since the world was introduced to DNA, the molecule that encodes heredity. And it is 35 years since the first experiment with a genetically engineered organism in a strawberry patch in California. Since then, field tests with GE plants have been conducted 20,000 times in the United States, under the watchful eye of agencies acting under the Coordinated Framework for Regulation of Biotechnology. Over 200 GE food and agricultural products have been cleared for commercialization following review by one or more of the three agencies involved in the framework—the U.S. Department of Agriculture, the Food and Drug Administration, and the Environmental Protection Agency.

In contrast to most new technologies, opposition to the use of genetic engineering and calls for regulation developed well before any products were on the market or even tested in the open. Some in the expert community, including academics and NGO scientists, demanded to know more about the potential ecological effects of growing GE crops and potential health effects of consuming food from those crops. Even after science-based protocols were put in place, and premarket review regulations adopted under USDA, FDA, and EPA statutes to ensure GE products would be as safe to grow and eat as their conventionally bred counterparts, a number of public interest groups and European governments were still opposed. Some remain so still.

In the meantime, with GE crops grown and consumed globally since 1996 on 7 billion acres in up to 29 countries, there are unprecedented amounts of peer-reviewed safety data—and no evidence that GE crops or foods have caused any adverse health or environmental effects, nor has any court ever found that to be the case in spite of dozens of legal challenges. GE crops have allowed farmers to realize such benefits as higher yields (growing more food per acre), a significant reduction in pesticide application using insect-resistant crops coupled with a corresponding reduction in worker exposure in the field, and the ability to fight weeds well into the growing season with herbicide-tolerant crops. Newer plants with consumer and health benefits have begun to further diversify this mix. As a result, GE crops support sustainable development in numerous ways, including food security—providing a safe, nutritious, and affordable supply for all consumers—while contributing to a reduction in food waste and minimization of agriculture’s environmental footprint, importantly its climate impacts.

Under intensive regulatory, commercial, and academic oversight, and notwithstanding its widespread and rapid rate of adoption, biotechnology has produced benefits that have flowed to society without any evidence of adverse health or environmental effects. It is a fair question to ask how many other new technologies can point to such an enviable track record. However, biotechnology has not been without its skeptics.

The fears and concerns initially raised about genetic engineering were based largely on uncertainty and lack of experience, at a point when GE products were still in the R&D stage and there had been no significant educational effort regarding the underlying science. This was particularly true with respect to the novel use of recombinant DNA techniques, which allow genetic material to be joined from organisms that would not share their genes in nature. Unlike the well-recognized risks associated with certain existing products that gave rise to many of our health and environmental regulatory programs in the 20th century, any risks that might be associated with biotechnology were purely speculative and hypothetical.

Did the Coordinated Framework and the health and environmental statutes at its core help facilitate the unprecedented adoption of products of this new technology by the food and agriculture sectors? Without question. Was the lack of any evidence that these products have caused adverse health or environmental effects a key factor as well? Absolutely. Is it time to take a close look at the science and the experience gained over the past 35 years and adjust our regulatory oversight accordingly? Positively.

In 1990, FDA completed premarket review of the first GE food product under the Coordinated Framework, clearing the path to commercialization for the first GE food ingredient, the chymosin enzyme, used in cheese and other dairy products. Fast forward to 2019, when GE crops were grown commercially on over 176 million acres in the United States, with soybeans, corn, and cotton making up the bulk of these acres, followed by canola, sugar beets, alfalfa, potatoes, papaya, squash, and apples. In the same year, an estimated 17 million farmers worldwide planted GE crops on a total of 470.5 million acres. From 1996 to 2019, GE crops were grown worldwide on an aggregate 6.7 billion acres, providing food, feed, fuel, and shelter to a global population that reached 7.7 billion, with estimated economic benefits of over $225 billion.

Nobel Laureate Norman Borlaug believed that genetic engineering was the only way to increase food production in a world with a rapidly growing population and disappearing arable land, and that GE organisms were not inherently dangerous because society has been genetically modifying organisms for a long time. The use of yeast microbes in baking and brewing as early as 6000 B.C. was the earliest practical use of genetics that we know of, followed by the centuries-old crossbreeding of plants and animals for desirable traits. But as Borlaug knew from his own research, crossbreeding could take decades before a useful new variety was created. Other breeding methods, used successfully since the 1950s to develop new crop varieties with chemicals and irradiation, also require multiple generations of plant selection and backcrossing. To the relative randomness of those techniques, many of which are still in use today, researchers have added the more recently developed molecular biology methods, referred to here as genetic engineering, which are far more precise and sophisticated, allowing scientists to develop and test new products safely and expeditiously.

To the extent that the regulatory processes put in place for GE products were able to allay the fears of the general public and scientific community by identifying and avoiding any potential hazards associated with the technology, the pre-implementation vantage point has been an advantage. But it has simultaneously been a burden because it requires decisionmaking in the early years in the face of a significant degree of uncertainty about both risks and benefits. Fortunately, that uncertainty motivated scientists and regulators to develop and utilize risk assessment techniques for evaluating the safety of GE products and risk management methods to address any concerns that may be identified, all of this prior to commercialization.

Looking back, it is easy to question the need for rigorous premarket review of many food and agricultural biotechnology products. At the outset, however, considerable political pressure was brought to bear on the government to do just that for all biotechnology products and particularly for microbes and other products that would be tested and ultimately put to work in the open environment. With the near unanimous support of the scientific community, the National Institutes of Health issued “Guidelines for Research Involving Recombinant DNA Molecules” in 1976, which rapidly established the de facto standard for recombinant DNA research in the public and private sectors.

Acting under those guidelines, NIH approved what would have been the first “deliberate release” experiment of a GE microbe in the open environment. The approval was challenged in federal court by the Foundation on Economic Trends, a nonprofit established by Jeremy Rifkin, an American economic and social theorist, writer, and activist, who took an early interest in biotechnology and was its primary, self-appointed watchdog for many years. The suit against NIH was the first of many to be brought by FOET and others.

Based on his finding that NIH had failed to meet its obligations under the new National Environmental Policy Act, Judge John Sirica enjoined both the experiment and NIH approval of any future deliberate-release experiments. On appeal, the injunction was affirmed as to the proposed experiment, but vacated as to NIH approval of future experiments. In an insightful concurring opinion with respect to scientific experimentation, public interest, and government oversight, Senior Circuit Judge George MacKinnon stated that he could understand how scientists knowledgeable in the field would approve the experiment, particularly when, in his view, “It would seem an experiment that releases into the environment organisms substantially the same as some already living there, and subject to the same naturally occurring controls, would present no risk.” He went on to say, however, that “the general public and those who have to pass on this action are not knowledgeable in this field and they are easily frightened by new scientific experiments and their possible consequences. It is such lay concerns that must here be satisfied by Environmental Assessments and Environmental Impact Statements,” under NEPA.

The injunction against NIH approval of this experiment on procedural grounds and subsequent challenges against EPA, albeit unsuccessful, signaled an abrupt end to any perceived honeymoon period for experiments in the environment, sent shockwaves through the burgeoning agricultural biotechnology research community, and caught the interest of many in the public-interest field as well. A report on the environmental implications of genetic engineering issued in 1984 by a House oversight subcommittee concluded that “the current regulatory framework does not guarantee that adequate consideration will be given to the potential environmental effects of a deliberate release” and recommended a moratorium. The Congressional Office of Technology Assessment warned of threats to the initial preeminence of U.S. biotechnology companies. Right on cue, draft biotechnology oversight legislation began to surface on Capitol Hill.

The growing public and political uneasiness with biotechnology research, including field tests of recombinant DNA organisms, and the inherent delays, costs, and unpredictability of litigation, were particularly concerning at a time when the R&D landscape had changed dramatically. Now, in addition to experiments being conducted in laboratories and greenhouses at numerous public and private research institutes, significant investments were being made by major corporations in the development of new biotechnology-derived products to be tested in the field. Fears of stifled innovation and a loss of the competitiveness of U.S. producers were raised at the highest levels of government and, in April 1984, the Reagan White House established an interagency working group to study and coordinate development of a regulatory policy.

When developers produce a new technology with applications in multiple different areas, it should come as no surprise that the authority to regulate products of that technology will rest with several overlapping government units. In the case of biotechnology, nine departments and eight agencies were tasked to undertake a top-to-bottom review and then develop recommendations for additional regulatory oversight, if warranted, while maintaining flexibility to accommodate new developments. Although both administrative and legislative actions were nominally on the table, there was a strong incentive to avoid any new law that might end up limiting progress rather than promoting it.

One of the key tasks in drafting what became the Coordinated Framework was to identify an existing statute that was best suited for regulation of each category of products for which biotechnology was being or could be applied. While acknowledging that the then-existing, product-based statutes were not drafted with biotechnology in mind, legal support for relying on those laws was based, at least in part, on Diamond v. Chakrabarty, a 1980 Supreme Court decision which upheld the patentability of a GE microorganism under the Patent Act—a law originally drafted by inventor Thomas Jefferson. The framework incorporated statutes that could address virtually every conceivable product category, although none had the pedigree of the Patent Act. The wisdom of using existing risk assessment statutes to review the safety of GE organisms would be recognized in 1987 when the National Academy of Sciences issued the first of several reports finding that any risks posed by such organisms were the “same in kind” as those associated with unmodified organisms and organisms modified by conventional means and, further, that the properties of a GE organism should be the focus of risk assessments, not the process used to produce the organism.

As the federal government wrestled with the challenge of how best to regulate biotechnology, it was confronted with two opposing schools of thought. Some promoted what would come to be associated with the Precautionary Principle, arguing that unless and until all questions and doubts about a new technology have been satisfactorily answered, it could not be trusted and had to be held in abeyance. Others argued for no new regulation based on the fact that GE techniques were simply an extension of conventional breeding. It was also argued that, even without new legislation, regulation could inhibit research and innovation, delay realization of significant societal benefits, and adversely impact American competitiveness.

In the end, the working group established by President Reagan took a middle ground. Products of biotechnology would be regulated based on existing safety standards and would be expected to be just as safe as their conventional counterparts. The public could be assured that a new fruit or vegetable product would be as safe to grow and produce and as safe and nutritious to eat as its conventional counterpart. This approach to regulation was applied regardless of the type of product (chemical, microbial, plant, or animal) or its intended use (agriculture, food, feed, fuel, forestry, medical, industrial, or consumer). With one notable exception, GE products intended for food and agricultural use would be subject to premarket review to the same extent and under the same standards as their conventional counterparts. The exception was USDA’s decision to review all GE organisms premarket based on a determination that they posed a potential “plant pest” risk. These fundamental concepts were incorporated when the White House issued the Coordinated Framework.

Regulation, of course, cannot remain static and, as a 2000 NAS report made clear, “Regulations should be considered flexible and open to change so that agencies can adapt readily to new information and improved understanding of the science that underlies regulatory decisions.” In this area, EPA, FDA, and USDA have each issued new or amended regulations, policy statements, or guidance documents when deemed appropriate. The agencies have also taken steps to identify individual products or categories of products that either no longer warrant premarket review or qualify for a reduced level of oversight based on experience. The key elements that allow agencies to make these determinations are familiarity with the product category and a history of safe use. Agencies have also moved to increase their oversight of certain product categories when warranted based on a review of product characteristics, exposure scenarios, and other data.

Regulation also has to be able to respond to new scientific developments and, for biotechnology, regulators must now address relatively new genome-editing techniques such as CRISPR-Cas9 that can be used to modify an organism’s DNA by insertion, deletion, or substitution of nucleotides at a specific site in the genome. EPA, FDA, and USDA have each taken preliminary steps to engage with the public and various stakeholders as part of the evaluation process for these new techniques. Just as recombinant DNA technology allows for valuable new traits such as disease resistance and enhanced yield to be added to a variety of plants and animals more rapidly and with greater precision than with conventional techniques, there is strong evidence that genome editing will dramatically improve breeding.

Given the anticipated benefits of genome editing in enabling scientists to tackle the spread of new pathogens, the need to feed a growing world population, and the adverse effects of climate change, the pressure to establish a clear, science-based path to commercialization will surely continue to mount. Once again, cautionary arguments have been made and voices have been raised in opposition. This time around, however, we are no longer at the dawn of the genetic engineering age. Scientists and regulators have a wealth of studies—and experience—to draw on in charting a path forward.

So what have we learned in over 45 years operating under the NIH Guidelines and over 35 years under the Coordinated Framework? Researchers developing GE food and agricultural products have carried out many thousands of controlled laboratory and greenhouse experiments and thousands more of controlled field trials without any reported harm to health, safety, or the environment. Hundreds of beneficial new GE products have successfully completed premarket review and are in widespread use, again without any evidence of having caused adverse effects. Notwithstanding the advanced state of the science and the enviable safety record for these products, court challenges against the regulatory agencies have continued over the past 35 years. Even in those few cases that succeeded, no court has ever found that a GE food or agricultural product was harmful.

Certainly, a legitimate argument can be made that, based on the science alone, there has been no demonstrated need for premarket review of most categories of biotechnology products. The closer a GE product comes to its conventionally bred counterpart, the stronger that argument becomes. If the conventional product is regulated solely post-market, then the same should apply to a GE product that meets specified criteria. Like products should be treated the same under the law. This is particularly relevant for gene-editing applications where the resulting products are similar to, or indistinguishable from, their conventional counterparts.

Exemptions from premarket review will likely trigger public and political pushback given the puzzling persistence of anti-biotechnology sentiment in some quarters, which is all the more reason for transparency in the risk assessment process. The regulatory agencies have managed to thread this needle for decades and can be expected to continue to find a path forward that respects both the science and the nature of our democratic system of government—including the desire for transparency. Thus, as in the past, each agency should remain open to the identification of individual products or categories of products, regardless of the method of production, that either no longer warrant premarket review or qualify for a reduced level of oversight. While some have called for totally new models and types of regulation for biotechnology, that would almost certainly require authorizing legislation with its inherent risks to future scientific advances.

Perhaps the most persuasive remaining justification for continued premarket oversight is the need to increase public acceptance, particularly with regard to food safety, where some still harbor unfounded fears of effects on nutrition and health. Concerned citizens have not hesitated over the years to demonstrate against the technology, boycott producers, retailers, and restaurants that sell GE food products, and campaign for consumer choice. The message to the regulatory agencies from the continued legal challenges and public opposition seems clear. As Judge MacKinnon advised in 1985, there are “lay concerns that must here be satisfied.” Continued emphasis on public education and outreach through all available means with respect to biotechnology, including genome editing, may ultimately help turn the tide.

An encouraging step toward facilitating consumer choice was taken recently when the food and biotechnology industries and virtually all other stakeholders reached agreement on legislation to create a National Bioengineered Food Disclosure Standard. The statute, which had bipartisan support on Capitol Hill, was signed into law by President Obama in 2016 and directs USDA to establish a mandatory, uniform national disclosure standard for human food that is or may be bioengineered. USDA promulgated implementing regulations in 2018. Disclosure of bioengineered content in covered food products became mandatory, through labeling or other approved means, just this year, adding a useful counterpart to labeling standards under USDA's National Organic Program.

While consumers acquaint themselves with disclosure under the new standard, one can certainly argue that it is time for USDA, EPA, and FDA to revisit their current premarket review programs with an eye toward using the extensive experience gained over the past 35 years and the enviable safety record of existing biotechnology products to identify appropriate, science-based opportunities for product exemptions and reduced premarket oversight. There is no need for new legislation. Each of the programs that cover food and agricultural products is science-based, and the governing statutes provide the authority to update policies, guidelines, and regulations, as needed, to reflect current scientific understanding and real-world experience.

Regulation exists to meet government’s responsibility toward society. At this time the federal government is faced with the need to meet several challenging health and environmental concerns that can be addressed using the techniques of modern biotechnology to develop valuable and, in some cases, desperately needed new products. A transparent, science-based regulatory process that recognizes the need for flexibility and the willingness to use it would best meet this objective. TEF

OPENING ARGUMENT Under intensive regulatory, commercial, and academic oversight, and notwithstanding its widespread and rapid rate of adoption, biotechnology has produced huge gains in well-being that have flowed to society without any evidence of adverse health or environmental effects.

No Longer a Major Question About the Court’s New Direction
Bethany A. Davis Noll - NYU Law

This past term, the Supreme Court had a chance to remake environmental law—and it took that opportunity. In West Virginia v. EPA, the Court decided whether a rule that the agency had promulgated during the Obama administration—aimed at reducing greenhouse gas emissions from power plants—was legal.

There were many twists and turns that got us to this point. The Clean Power Plan had used “generation shifting”—a common practice companies use to meet emissions standards—shifting generation from coal to gas, or gas to renewables. The Trump administration repealed and replaced the Obama-era regulation with the Affordable Clean Energy rule, in so doing asserting that generation shifting was unambiguously illegal.

But a day before Biden's inauguration, the D.C. Circuit struck that Trump-era rule down, holding that this assertion was wrong. Rather than move to replace the ACE Rule, EPA asked the D.C. Circuit to stay the mandate instead—a tactic that made sure the Clean Power Plan did not spring back into life. Now under Democratic control, the agency did not appeal the loss, but intervenor states, led by West Virginia, did.

EPA’s decision not to propose and finalize a new rulemaking left the Supreme Court with an opportunity to grant that cert petition. On June 30, the last day of the term, the Supreme Court held that EPA did not have the authority to set the Clean Power Plan’s standards based on generation shifting.

It could have been much worse. The Court could have used the opportunity to tell EPA exactly how to interpret the statute to regulate greenhouse gas emissions—undermining the executive’s authority to make decisions and interpret statutes. It could have told EPA that regulating greenhouse gas emissions at all requires specific authorization from Congress—undermining a number of other greenhouse gas emissions rules. It could have said that Congress had not delegated that authority to EPA at all—threatening all of the regulatory state. The Court avoided this parade of horribles.

What it did instead was to explicitly adopt the major questions doctrine to hold that EPA lacked authority to use generation shifting. In other words, the Court determined the rule's limits. It first adopted West Virginia's characterization of the Clean Power Plan as a rule intended to remake the power sector by shifting states away from coal. It then held that any rule seeking to do something that ambitious is subject to the doctrine, which requires the agency to point to a clear statement granting it the authority to answer that question.

That doctrine has been percolating for some time, but this decision marks a new era. There is no real standard governing what constitutes a major question. It isn't just a matter of how much a new rule costs. It could be that the policy is ambitious, the statute little-used, or the regulatory strategy novel.

The doctrine is bound to come up in pretty much every regulatory and environmental case to come. An attorney general coalition, led by Texas AG Ken Paxton, has already argued in comments that a new policy banning asbestos is subject to the major questions doctrine. Writing for the majority, Chief Justice Roberts describes West Virginia as an extraordinary case. But it is hard to see how this doctrine will be limited at all.

There are two cases on the docket for the 2022-23 term that could bring about even more seismic changes. In Sackett v. EPA, petitioners are challenging a decision that wetlands on their property are subject to regulation under the Clean Water Act. The Sacketts hope to use the opportunity to convince the Court to restrict EPA’s jurisdiction severely under the Clean Water Act. They have argued that EPA’s jurisdiction extends only “to traditional navigable waters and intrastate navigable waters that link with other modes of transport to form interstate channels of commerce.”

In National Pork Producers Council v. Ross, petitioners are challenging a California proposition that requires pork sold in the state to come from pigs raised under conditions the state has judged to be humane and healthier. The producers have argued that California is improperly reaching beyond its borders to regulate pork production in other states, because only a small proportion of the country's pork production takes place in California. But many states regulate the quality of products that can be consumed in state, from energy to food and beyond.

States' rights doctrines and environmental law are in conflict here, and both are changing in response, right before our eyes.


Water Officer of the United States
Akielly Hu - Environmental Law Institute

In a political climate marked by polarization and division, sometimes you need a tangible reminder of how interconnected we are. For Radhika Fox, this uniter comes in the shape of one of our most important, yet underappreciated resources: water. Even when we are unable to come to terms with our interdependence, the evidence is plain to see. “If there’s somebody upstream, there’s always somebody else downstream. That’s the nature of the water cycle,” says the country’s most senior water policy official.

A day-one appointee in the Biden administration, Fox was officially sworn in as assistant administrator for water at the U.S. Environmental Protection Agency on June 16, 2021. She is the first woman of color and the first person of Asian American heritage to ever hold the position—a historic moment for the water office.

It’s also “a historic moment for water,” as Fox says. Half a year after her confirmation, the bipartisan infrastructure law injected into the economy more than $50 billion for clean water—the greatest single federal investment in water in the nation’s history. As new infrastructure funds flow through the country, all eyes are on EPA and its Office of Water to tackle some of the most complex and important issues facing this vital resource.

On the most basic level, Fox and her team’s job is to keep the nation’s surface waters clean and its drinking water safe. These Herculean tasks require implementing an alphabet soup of regulations and statutes, most importantly the federal Clean Water Act and the Safe Drinking Water Act. Policymakers at the water office draw up rules to regulate the filling of wetlands and the discharge of pollutants, among other duties.

The water team is also taking on many of the nation’s most pernicious environmental injustices, including lead, per- and polyfluoroalkyl substances, or PFAS, and other toxics in drinking water. Fox has named these issues as top priorities for her tenure. “There’s nothing more fundamental and more essential than equity in the context of water management,” she says.

“I mean, just think about your day, right? You can’t get through your day without access to clean, safe water, whether that’s having that cup of coffee or a glass of water, or being able to provide safe water for your children. Unfortunately, millions of people in this country and all around the world don’t have that fundamental, basic security,” Fox says.

Crises like the lead poisoning in Flint and Benton Harbor, Michigan, have made clear the urgent need for improved, equitable water management. Exposure to lead, particularly in drinking water, disproportionately affects low-income communities and communities of color. The contaminant impairs neural development in children and causes greater risk of kidney failure and stroke, among other health conditions. According to the White House, lead pipes run through an estimated 6 to 10 million homes, as well as 400,000 schools and child care centers.

Early on in his presidency, President Biden announced a goal to replace all lead service lines in the United States. The directive relies heavily on the Office of Water’s regulatory muscles, particularly when it comes to tightening protections under the Lead and Copper Rule, a Safe Drinking Water Act-related regulation published by EPA to limit these substances.

LCR has spun through a turnstile of revisions over the years. In December 2021, EPA announced the office would develop new revisions to the rule, to make the regulation more protective than its current version. “We had a huge, robust public engagement process last year with communities who are on the front lines of the lead crisis, tribal nations, co-regulators, and national associations,” Fox says of the upcoming rule.

In an earlier E&E News interview, Fox clarified that these roundtables would help close a crucial gap in understanding. “We know historically that we haven’t really considered enough the way in which the Lead and Copper Rule impacts people of color,” she said. EPA expects to finalize the new revisions by October 2024.

Concurrently, the water office is moving forward on policies to limit PFAS. These so-called “forever chemicals” are found in the bodies of virtually all Americans, and are associated with cancer, immune disorders, and developmental issues, among a host of other health harms. Fox co-chairs the EPA Council on PFAS, along with the agency’s Region 1 Deputy Regional Administrator Deborah Szaro. The group coordinates agency-wide efforts on PFAS according to a timeline in the council’s strategic roadmap. Targeted actions include regulating PFAS under the SDWA and minimizing chemical discharges in wastewater.

Although many of these rulemakings are still in progress, Fox says she is already “incredibly honored to be in this role as assistant administrator for water to continue the journey toward water equity and justice.” Growing up, her understanding of the disparities between communities and countries developed on an intuitive level, rather than as a conscious awakening.

“My commitment to equity and justice and opportunity for all comes from my upbringing, first and foremost. I am the child of immigrants who came to this country searching for economic opportunity, and I very much stand on their shoulders,” she says.

Fox’s parents grew up in rural India, and her grandparents worked as small farmers, growing rice, lemons, and bananas. Water was essential to her family’s agricultural livelihood. At the same time, their village lacked tap water or flush toilets; the family relied on drinking wells and pit latrines.

Her family regularly traveled back to her grandparents’ village during the summers of her childhood, an experience that allowed her to “see how infrastructure, especially water systems that we didn’t have at my grandmother’s village, can create these communities of opportunity.”

“I think equity is deeply ingrained in who I am because of my background, and from recognizing that the opportunities afforded to you are often random—like from whom and where you were born. I have always felt a desire to give back because I have been given so much opportunity by my family,” she says.

As an undergraduate student at Columbia University, Fox volunteered in Harlem and “saw firsthand how there are so many systems and structures that afford opportunity to some, but not to others.” The experience affirmed a lifelong commitment to equity and justice. Even in her earlier work on infrastructure, housing, and transportation, environmental justice served as “a thread through all of those experiences.”

Fox describes her career trajectory as “grounded in infrastructure.” After more than a decade as the federal policy director at PolicyLink, a research institute dedicated to racial and economic equity, she joined the San Francisco Public Utilities Commission as director of policy and government affairs, helping to provide water and wastewater services to more than 2.6 million Bay Area residents.

“What drew me to the SFPUC was their infrastructure work. The biggest tributary to infrastructure investment in San Francisco is actually the water department. People don’t realize that, so although I went there for infrastructure, I fell in love with working on all kinds of water issues,” Fox says.

She continued to make her mark on the water world as CEO of the national nonprofit U.S. Water Alliance, where her standing in the sector became widely recognized. “Radhika Fox is a significant figure in the water sector—a woman with tremendous respect and standing in the community,” says Tracy Mehan, former assistant administrator for water under the George W. Bush administration. At the Water Alliance, Fox worked to find common ground between water utilities, businesses, nonprofits, and other water sector stakeholders for more than five years before joining EPA.

What sets Fox apart from other bureaucrats is her insistence on grounding policies in the lived experience of everyday people. Fox has frequently mentioned in public statements that her team’s policymaking will be guided by a principle of “listening to all sides to find enduring solutions.” The philosophy has been a “through line” in her career, and it will be put to the ultimate test in reaching consensus on one of water’s most contentious policy issues—the Waters of the United States rule.

Under the statute, Clean Water Act jurisdiction extends to any area designated as “Waters of the United States.” Exactly what those waters include is something policymakers have failed to reach consensus on since the 1980s. A confusing definition of WOTUS jeopardizes the ability of governments at every level to protect the nation’s waters, for a simple reason—whatever doesn’t count, doesn’t get regulated under the federal law.

One sticking point is whether ephemeral or intermittent streams should be covered. The issue has significant implications for the arid Southwest, where water levels tend to fluctuate much more than in other areas of the United States. Relentless back-and-forth between administrations, along with court decisions that have introduced even more confusion over the decades, has left a patchwork of jurisdictional definitions operating across the country.

Fox describes the last decade of the WOTUS debacle as a “constant ping-pong.” In 2015, the Obama administration issued a Clean Water Rule to define WOTUS. That definition was later rescinded by the Trump administration and replaced with the Navigable Waters Protection Rule in 2020, a regulation that High Country News said would potentially “exclude as many as 94 percent of Arizona’s and 66 percent of California’s streams and rivers from federal oversight, depending on how regulators interpret it.” The rule was eventually vacated by a federal district court in Arizona in 2021.

Overhauling WOTUS is a focal point for Fox’s tenure in the Office of Water. In her Senate confirmation hearing, Fox affirmed, “Administrator [Michael] Regan and I want an enduring definition of Waters of the U.S., one that can withstand administration changes.” So far, the agency has initiated a two-part rulemaking process that first restores a version of the pre-2015 WOTUS definition. Next, the office will establish a brand new definition, potentially settling the matter once and for all.

Establishing a lasting rule won’t be easy. In a podcast interview with Fox, David Ross, the former assistant administrator for water under the Trump administration, delivers brief advice that sounds more like an inside joke: “I’m just going to say: ‘Good luck.’”

Fox and her team have committed to a system of robust public engagement to guide the office’s decisionmaking. The process involves a series of stakeholder meetings and 10 regional roundtables to be held over the coming spring and summer. Roundtable discussions will include representatives from water and wastewater service providers, agriculture, environmental justice communities, tribal nations, and state and local governments, among other groups.

“It’s an issue where there is so much division. What we have been focused on is: how do we get to a durable definition of waters of the United States, one that tries to balance the very diverse perspectives that have a stake in this definition? I believe that we’re not going to be able to do that unless we listen to all sides,” she says.

Believing that different sides can reach consensus feels radical nowadays, especially in a country with social and political chasms as entrenched as those of the United States. But Fox believes in the power of hearing from someone you may have never otherwise crossed paths with—and she’s seen it in practice.

In her first year as CEO of the U.S. Water Alliance, Fox helped create the Water Equity Network, a program that guides utilities in building equitable water systems. The idea was born of a desire to act on the severe human health issues faced in the Flint water crisis and beyond, as well as lessons learned at the San Francisco Public Utilities Commission.

“My experience at the SFPUC proved that water agencies can be community anchor institutions. We were the first utility in the nation to adopt a community benefits and environmental justice policy, and I saw how water agencies are fundamental to the solution,” she says. The organization invited cities like Atlanta, Buffalo, Cleveland, Camden, Milwaukee, and Pittsburgh, among others, to participate. It gathered water agencies, local officials, and frontline community organizations most impacted by water-related challenges—including lead, contaminated water, PFAS, and flooding—and forced everybody to listen to all sides.

“The water managers—these technical leaders—heard firsthand from people whose water had been shut off. They learned what that meant for them, and what that meant for their children. These were people who had their basements flooded, and were just living in conditions that no one should have to live in,” she recounts. “The water managers heard directly from those communities, and in turn, the communities heard about the constraints that water managers face. There were so many breakthrough solutions that happened because we created a space for a deliberate, thoughtful airing of all of the issues.”

To Fox, public engagement is not just a box to be checked off. She believes that centering these lived experiences strengthens decisionmaking in a substantial way. “When we listen to all sides—when we embrace the complexity of the issues that we’re tackling in the water sector—we can actually reach better outcomes because of that listening. It leads to a different set of solutions,” she says. “That’s why this principle is so foundational to how I think about the work that I do every day.”

Fox’s leadership, woven with a philosophy that aligns with the Biden administration’s investments in environmental justice, comes at an opportune time. Yet the choice is deliberate: diversity among the country’s top political officials is a minimum requirement for more representative, people-first policies. As Fox puts it in a Politico interview, “I think selecting somebody like me—frankly, as a woman of color in this leadership role—is also part of the Biden-Harris commitment to building a federal team that reflects the diversity of this nation.”

In the coming years, the Office of Water’s responsibilities will only grow, particularly when it comes to ensuring that new funds under the bipartisan infrastructure law go to those who need them most. About 85 percent of those funds will flow through State Revolving Funds, or SRFs, the main channel for distributing money for water infrastructure and projects. The infrastructure law mandates that 49 percent of this money go to disadvantaged communities as grants and forgivable loans. But what exactly constitutes a disadvantaged community is left to the discretion of the states.

On March 8, the water office released a 56-page memo to state SRF program managers and EPA regional water division directors to provide guidance on stewarding these funds and clarify responsibilities states have to disadvantaged communities. “The memo encourages states to look at their definition of disadvantaged communities to make sure it’s consistent with statutes, and provides guidance on preferred factors that should be considered when making the investments in disadvantaged communities,” Fox says.

“The water sector can and must do better to steer all kinds of investments, whether it’s the bipartisan infrastructure law money or other infrastructure funding programs, to these communities,” she says. “With so much money on the table, and so many challenges that we see around the country, I think this is the moment to meet the needs of all communities.”

She emphasizes that the memo is only the first step in EPA’s work to ensure that the historic investments in water don’t leave anyone behind. “One exciting thing that is coming later this year is a very robust technical assistance strategy to help disadvantaged communities build their technical, financial, and managerial capacity to receive these funds. We’re quite excited to work with states, tribes, and territories in that next phase.”

This year marks the 50th anniversary of the Clean Water Act, a law passed during a time when rivers caught on fire from unchecked pollution. Fox says there is still much work to be done. Today, many of the most insidious water issues are invisible, even though their effects may not be. Millions of Americans depend on the work of the Office of Water and its ongoing rulemakings. The stakes are high, and so is the pressure on Fox’s team.

Nonetheless, Fox’s optimism remains grounded in the importance of this work, and the power of water to connect us.

“I think that one of our foundational principles as a nation should be to recognize that water is essential to everyone—to every business, to every community, to every person, and to use that as our north star as we develop future policies.” TEF

PROFILE EPA Assistant Administrator Radhika Fox speaks on her journey to water, the historic infrastructure law investments, and her team’s approach to managing the country’s most essential resource.

False Promise of Cost-Benefit Analysis
Author
Amy Sinden - Temple University School of Law
Temple University School of Law
Current Issue
Issue
2

Making decisions is hard in environmental policy. It requires grappling with controversial value choices, complex systems, and vast uncertainties about future outcomes. Perhaps it’s no surprise then that the promise of cost-benefit analysis—the idea that we can just plug numbers into a mathematical formula that will spit out objectively determined, welfare-maximizing public policy prescriptions—is almost irresistibly alluring.

But it’s a false promise. EPA regulates literally hundreds of pollutants that we know cause serious harm to human health and the environment. But knowing something is one thing; being able to quantify it is another. For the vast majority of these pollutants, the agency simply doesn’t have the fine-grained data necessary to put a dollar figure on the benefits of controlling them. These data gaps are so pervasive that most of the time, they prevent EPA’s cost-benefit analyses from monetizing whole categories of benefits the agency itself views as significant. A study I published in 2019 showed that happening in 80 percent of the agency’s major rulemakings issued between 2002 and 2015. Indeed, the problem is so severe that in many instances, EPA is entirely unable to quantify any of the impacts associated with the pollutants a regulation is designed to control.

And all of this is to say nothing of values—like dignity, equity, or human suffering—that resist quantification altogether.

If a CBA can’t put a dollar figure on all the significant categories of benefit, it can’t calculate net benefits. And if it can’t calculate net benefits, formal CBA really doesn’t tell you much. It certainly can’t lead you to the Shangri-La of net benefits maximization promised by its proponents. The result is that the CBA requirement effectively ends up imposing a burden of proof on agencies that is in many instances insurmountable, putting a chilling effect on the implementation of regulatory safeguards. EPA personnel shy away from proposing rules whose unquantifiable benefits keep the cost-benefit math from coming out right, for fear of reprimand by the bean counters at the White House’s Office of Information and Regulatory Affairs. Under a set of executive orders dating back to President Reagan, that little-known but powerful office houses a small group of economists responsible for ensuring that federal regulations pass the CBA test.

It was worries about precisely this kind of dynamic that led Congress to largely avoid formal CBA in crafting the statutes from which most of our biggest and most contentious environmental regulations originate. Instead, lawmakers came up with a lot of creative ways to make sure costs are kept in check and are not disproportionate to benefits, without requiring them to be directly weighed against each other. In this way, they avoid the need to express regulatory benefits—things like saving lives or preventing neurological damage to kids—in monetary terms.

These are the scrappy, street-smart tools of regulatory decisionmaking—tools like feasibility analysis, cost-effectiveness analysis, and multi-factor balancing. In contexts in which significant benefits (or costs) can’t be quantified, these tools can often provide a more useful framework for rational decisionmaking. And while they may look less elegant in theory, they have a proven track record of actually reducing pollution levels in the real world. But the current hyper-formalistic approach to CBA that has become de rigueur under the regulatory review executive orders is often in tension with these statutory requirements.

In reforming the regulatory review process, President Biden should resist the false allure of CBA and instead reaffirm the primacy of federal agencies and their statutory mandates in regulatory decision-making. He should dispense with the CBA mandate, directing the agencies to instead use the context-specific methods set out in their authorizing statutes for considering the costs and benefits of regulations. These tools are pragmatic, effective, and tailored to specific contexts and information constraints—designed to take advantage of the information we have rather than the information we wish we had.

Add Progress, Stability to Policymaking
Author
Caroline Cecot - Antonin Scalia Law School at George Mason University
Antonin Scalia Law School at George Mason University
Current Issue
Issue
2

The Trump administration’s biggest actions were often deregulatory—rescinding or modifying the prior administration’s recently issued rules. These moves frequently targeted the Obama administration’s flagship environmental protections, including the Clean Power Plan, the Waters of the United States Rule, and its groundbreaking vehicle fuel economy and greenhouse gas standards—all of which, according to their cost-benefit analyses, were expected to provide hundreds of millions of dollars in net monetized benefits each year.

Thankfully, in some of these cases, courts blocked the Trump actions, at least in part based on the administration’s shoddy reasoning for moving away from CBA-justified policies. But if the commitment to CBA and what it represents is abandoned, there will be no protection from such regulatory swings in our increasingly polarized society.

At its core, a commitment to CBA is a commitment to evidence-backed policies. The tool is meant to be a neutral aid to decisionmaking, helping highlight moves from the status quo that are net socially beneficial based on available evidence. If there’s no economic or scientific evidence to support a move away from the status quo (in either direction), then CBA will not help justify the move. In such cases, federal agencies could pursue their objectives without CBA’s support—as they often do. But if there is solid evidence to support a move, a CBA will provide a strong justification to an agency advancing such an action. The resulting policy will be more resilient, especially against a future administration with different priorities.

In Trump’s efforts to roll back Obama-era regulations, for example, the new administration was most successful when prior regulations were not supported by relatively complete CBAs, as was the case for the Hydraulic Fracturing on Federal and Indian Lands Rule. But it was least successful when prior regulations were strongly CBA-justified, such as fuel economy and greenhouse gas standards.

No one thinks CBA, as currently practiced, is perfect. Given incomplete data and underlying scientific uncertainty, CBAs today cannot produce one number to unequivocally direct policies. Instead, they often point to a range of expected values of different courses of action. And admittedly, benefits to the environment are not always easily converted into the monetary values that make CBA most useful—though great strides have been made in doing this, such as valuing the negative consequences of exposure to particulate matter and the accumulation of greenhouse gases.

Moreover, the effort to monetize benefits has sometimes revealed them to be more valuable than initially thought. Examples include the use of the Value of Statistical Life to assess mortality-risk reductions, the Reagan administration’s decision to pursue a stricter standard for phasing out lead in gasoline, and the value of additional reductions in particulate matter emissions below the cost-blind National Ambient Air Quality Standard. But, most importantly, CBA is still the best available tool for advancing sensible and resilient policies to address our most pressing environmental problems.

Pro-regulatory and anti-regulatory advocates both push for less analysis to impose their preferred policies more easily. They attack CBA simultaneously for being easy to manipulate (by the other side), anti-regulatory or pro-regulatory (as relevant), not transparent, and persistently net costly for some groups—eroding decades of bipartisan consensus around the use of the tool. But they typically fail to acknowledge that their preferred alternatives all perform worse by these same measures.

And, simply put, those who value efforts to protect the environment have more to lose in a regulatory dynamic where policy swings from one administration to the next. Many issues that are particularly important, such as seriously tackling the threat of climate change, involve sustained commitments over a long time horizon in order to realize benefits. The focus should be on fostering commitments to welfare-enhancing policies and generating the necessary evidence to obtain bipartisan buy-in. This work is difficult, no doubt, but necessary.

Staying Within the Guardrails
Author
Daniel Farber - UC Berkeley
UC Berkeley
Current Issue
Issue
2
Staying Within the Guardrails

The scholars Michael Livermore and Richard Revesz have been among the most important voices in the legal academy supporting the use of cost-benefit analysis in decisionmaking. They have argued for years that CBA can provide a foundation for robust, protective environmental, health, and safety regulations. Their latest book, Reviving Rationality: Saving Cost-Benefit Analysis for the Sake of the Environment and Our Health, continues to make that argument. But their faith in the future of CBA seems to have been deeply shaken by the Trump presidency. The book thus expresses a sense of crisis about the methodology’s future.

The term cost-benefit analysis is sometimes used to mean any comparison of pros and cons, which is something we all do intuitively for important decisions in ordinary life. For instance, whether to pay extra for a more fuel-efficient furnace in anticipation of lower monthly heating bills—you can even add greater comfort into the equation. For present purposes, though, CBA goes beyond that: it means a very rigorous way of evaluating almost any proposed action by balancing the pros and cons, using economic analysis to quantify all the costs and benefits of an action, even those that are not at first glance economic. Basically, everything gets converted into dollar equivalents in this process, even such concepts as health or security. And problematically, unlike when a homeowner decides to buy a more efficient furnace, the costs are usually borne by one set of parties while the benefits accrue to another.

The practical significance of CBA stems largely from presidential efforts to centralize review of proposed regulations in the White House. Although some laws explicitly require the practice or at least a full balancing of costs and benefits, those laws are few. Nevertheless, for the past forty years, starting with Ronald Reagan’s Executive Order 12291, issued in 1981, presidents have ordered agencies to make cost-benefit analysis a major part of their decisionmaking.

Many on the left have long viewed cost-benefit analysis with suspicion, seeing it as inherently biased against regulations needed to protect the public and the environment. Frank Ackerman and Lisa Heinzerling’s 2004 book Priceless: On Knowing the Price of Everything and the Value of Nothing provides a classic critique. The title of the book itself expresses their skepticism about whether environmental values can be reduced to monetary terms. They also viewed CBA as a stealth attack on regulations. As they put it, “cost-benefit analysis promotes a deregulatory agenda under the cover of scientific objectivity.” Progressives’ increased emphasis on environmental justice further undercuts CBA’s appeal, as the practice seems to be unhelpful to impacted communities.

While in the past CBA has been touted as the gold standard for rational regulation, it seems doubtful that it retains the political support to effectively play that role in the future. It may still be valuable, however, as a safeguard against regulations whose costs and benefits are too far out of alignment. If it cannot provide the “right answer” to a regulatory issue, CBA can at least indicate whether a proposed solution is outside the zone of reasonableness. To borrow a phrase from Livermore and Revesz, it can provide an important guardrail for the regulatory process.

To make the case for this more modest role for cost-benefit analysis, I will begin with a brief dive into how it actually works, followed by a quick review of its history and implementation under Presidents Obama and Trump. That will bring us to the question of how, in our increasingly polarized polity, CBA can best contribute to the regulatory process.

The debate over cost-benefit analysis cannot really be understood without some sense of how it actually works. There are typically financial costs for industry on one side of the balance. Those are easy to measure, at least in principle. However, environmental benefits, such as improvements in public health, must somehow be converted to monetary terms to be compared with the costs. Economists have developed a variety of methods for monetizing benefits. For instance, they calculate a regulation’s lifesaving benefits by assigning a monetary value to each death and multiplying that by the number of deaths prevented by the regulation. The monetary amount is called the “value of a statistical life.”

They determine this amount by studying employment statistics in occupations with different levels of risk, and asking how much you have to pay workers in exchange for submitting to a higher level of danger. According to some leading studies, you have to pay workers an extra $10,000 per year to accept a job where their risk of death increases by a tenth of a percent. Because $10,000 is a tenth of a percent of $10 million, an economist would therefore say that the value of a statistical life is $10 million. (Actually, this is really just another way of saying how much of a pay cut workers are willing to take in exchange for a safer workplace in a different job. But that’s another story.)
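
To make that arithmetic concrete, the following is a minimal sketch in Python using only the round numbers quoted above (the $10,000 wage premium and the 0.1 percent risk increment); the count of deaths prevented at the end is a hypothetical figure added purely for illustration.

```python
# Minimal sketch of the wage-risk arithmetic described above.
# The inputs are the article's round numbers, not an official estimate.

extra_annual_pay = 10_000       # extra pay workers demand per year ($)
added_fatality_risk = 0.001     # corresponding rise in annual risk of death (0.1%)

# Implied value of a statistical life: scale the wage premium up to one
# "whole" statistical death.
vsl = extra_annual_pay / added_fatality_risk
print(f"Implied value of a statistical life: ${vsl:,.0f}")    # $10,000,000

# A regulation's monetized lifesaving benefit is then simply
# (deaths prevented) x (VSL). The count below is hypothetical.
deaths_prevented = 100
print(f"Mortality benefit for {deaths_prevented} prevented deaths: "
      f"${deaths_prevented * vsl:,.0f}")                      # $1,000,000,000
```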

Historically, the White House has told agencies to count all regulatory benefits in their calculations. This has become controversial recently. A regulation may have side-benefits that don’t directly relate to the purpose of the regulation. For instance, EPA has required pollution controls that dramatically slash emissions of carbon monoxide from vehicles. As Livermore and Revesz explained in an earlier book, Retaking Rationality, it turns out that one major benefit of those air pollution regulations is that it is now very difficult to commit suicide by gassing oneself in a car. Some would-be suicides simply use another method, but apparently some simply give up on the idea. Economists as well as the White House guidelines would count the reduction in suicides as a benefit of the regulation. Conservatives argue that these incidental benefits (often called co-benefits) shouldn’t count. A recent example is the dispute over EPA regulations of methane, a potent greenhouse gas. Although targeted at preventing climate change, the regulations would also have the co-benefits of reducing ozone, fine particulates, and hazardous air pollutants not directly targeted by EPA’s regulation.

Here’s a quick, very rough example of how cost-benefit analysis works in a public health context: whether to mandate COVID vaccinations. The benefits of vaccination at any given time depend on how widespread COVID is. Just to use some specific figures: As of November, there were roughly seven COVID deaths on average per 100,000 unvaccinated people and only 0.5 deaths among fully vaccinated ones. (It’s not clear how the omicron variant will impact the numbers; at this writing, there are still many unknowns.) In other words, if you had fully vaccinated 100,000 people at that point, you could expect to save about seven lives. Given the value of $10 million that EPA assigns per life, vaccination would have benefits of at least $70 million for that population. (The “at least” is because of other benefits that I haven’t tried to include, such as reductions in hospitalizations and slowing the spread of the disease to other people.) Two doses of Pfizer vaccine cost the federal government about forty dollars, but let’s round up to a hundred to account for the costs of transporting, storing, and administering the vaccine. The cost for vaccinating 100,000 people then comes to $10 million, compared to $70 million in benefits. So, according to this very rough calculation, the benefits of vaccination outweighed the cost by seven to one. This should be an appealing revelation from a public perspective, and cause resisters to line up to get the jab. But millions of people find these benefits unappealing or outweighed by other concerns, including politics.
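
Laid out explicitly, the rough calculation above looks like the sketch below. It uses only the round figures in the text (seven versus 0.5 deaths per 100,000, a $10 million value per statistical life, and roughly $100 per person vaccinated) and counts mortality benefits only.

```python
# Back-of-the-envelope vaccination CBA, using only the round figures in the text.

population = 100_000
deaths_per_100k_unvaccinated = 7.0
deaths_per_100k_vaccinated = 0.5
value_of_statistical_life = 10_000_000   # $ per statistical life
cost_per_person = 100                    # vaccine plus transport, storage, administration ($)

# Lives saved by fully vaccinating the population.
lives_saved = (deaths_per_100k_unvaccinated - deaths_per_100k_vaccinated) * population / 100_000

benefits = lives_saved * value_of_statistical_life   # mortality benefits only
costs = cost_per_person * population

print(f"Lives saved: {lives_saved:.1f}")              # 6.5, which the text rounds to about seven
print(f"Benefits:    ${benefits:,.0f}")               # $65,000,000, rounded in the text to $70 million
print(f"Costs:       ${costs:,.0f}")                  # $10,000,000
print(f"Benefit-cost ratio: {benefits / costs:.1f}")  # roughly seven to one
```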

The debate over cost-benefit analysis is now at least forty years old, going back to Ronald Reagan’s executive order. Environmentalists were sharply critical of the EO, which was associated with Reagan’s campaign to “get the government off the backs of the people.” To the surprise of some observers, however, more regulation-friendly presidents like Clinton and Obama tinkered with Reagan’s order but left its core intact. A 1993 Clinton executive order, numbered 12866, has continued to be the major basis for cost-benefit analysis, with a few tweaks from later presidents. The George W. Bush administration seemed, at least to environmentalists, like a replay of the Reagan years, with cost-benefit analysis serving to once again undercut regulatory initiatives.

The Obama administration showed, however, that cost-benefit analysis could be used to favor progressive regulation, as Livermore and Revesz have long argued it could. In the area of climate change, the administration’s cost-benefit analysis showed that restrictions on fossil fuel use produced major health benefits, particularly by reducing dangerous fine particulates. Obama also introduced the use of the social cost of carbon in cost-benefit analysis. The social cost of carbon is an estimate of the harm produced by adding a ton of carbon to the atmosphere. After an intensive literature review and additional economic modeling by an expert task force, the administration used CBA to justify ambitious fuel efficiency standards for cars, limits on carbon emissions from power plants, and a host of other environmental measures.

Many critics on the left thought, however, that even under regulation-friendly Obama, cost-benefit analysis resulted in weaker regulations and bureaucratic foot-dragging. They were particularly critical of the president’s first appointee as regulatory czar, law professor Cass Sunstein. They blamed Sunstein for delays in the issuance of regulations, weakening regulations proposed by EPA, and killing tightened restrictions on ozone pollution. Sunstein himself would point instead to his role in approving Obama’s climate change regulations.

Meanwhile, Obama’s use of cost-benefit analysis disenchanted many on the right, who came to see CBA as an inadequate safeguard against government overreach. The surprising result, under the Trump administration, was a willingness to short-circuit CBA in order to justify aggressive deregulation. In a 2019 article, I took a close look at the role of CBA in the last administration’s first two years. The conclusion was clear: cost-benefit analysis under Trump was an afterthought at best. As did other observers, I found that the administration was far more interested in regulatory costs than in regulatory benefits. One signal of Trump’s indifference to CBA was the selection of a young, previously unknown lawyer with two years of government experience as Trump’s second regulatory czar.

While not rescinding existing orders, Trump gave short shrift to the benefit side of CBA. Within two weeks of taking office, he issued Executive Order 13771. This order required that the costs of any new regulation be offset by cost-savings from repealing existing regulations. Notably, regulatory benefits were not considered, only costs. It also required that agencies eliminate at least two regulations for each new regulation. Obviously, a balanced appraisal of costs and benefits was considered insufficient to push the massive deregulation that Trump was seeking.

The most glaring example of the Trump administration’s willingness to play fast and loose with cost-benefit analysis involved fuel efficiency standards for vehicles. Trump was eager to freeze the scheduled tightening of the standards put in place by Obama, even though the car manufacturers vocally supported them. In the view of outside economists, the initial version of the Trump era cost-benefit analysis did not even pass the laugh test. For example, the CBA claimed that as a result of loosening the standards, “the overall size of the vehicle fleet falls even though new vehicle prices are lower.” Products rarely become less popular as a result of price cuts. “On its face,” experts said on the Resources for the Future web site, “this is inconsistent with economics.” Even the economists whose work the government relied on denounced the CBA. In a December 2018 article in the journal Science, they concluded that “the 2018 analysis has fundamental flaws and inconsistencies, is at odds with basic economic theory and empirical studies, is misleading, and does not improve estimates of costs and benefits of fuel economy standards beyond those in the 2016 analysis.”

One of Trump’s priorities was eliminating climate change regulations. Part of this campaign involved slashing estimates of the impact of greenhouse gases. As mentioned earlier, the Obama administration had convened an expert task force to estimate the harm done by adding one additional ton of carbon dioxide to the atmosphere. The task force also estimated the social cost of another potent greenhouse gas, methane — that is, the harm done by each additional ton of methane emitted. The Trump administration took aim at both of these estimates and came up with numbers that were only a fraction of the Obama estimates. Since climate change is a global problem, Obama’s regulators had considered the global impacts of climate change, but the Trump team considered only direct impacts within the United States. As a result, the estimate of the social cost of methane was slashed by more than 95 percent.

The Trump estimate was resoundingly rejected as arbitrary and capricious by a federal district court in 2020. The court noted that the Obama estimate “resulted from an interagency team of experts developed through years of public comment and peer review,” whereas the Trump estimate “was developed in months without any public comment or peer review.” The court also observed that “focusing solely on domestic effects has been soundly rejected by economists as improper and unsupported by science.” And in terms of effects on U.S. citizens, the new estimate “ignores impacts on 8 million United States citizens living abroad, including thousands of United States military personnel; billions of dollars of physical assets owned by United States companies abroad; United States companies impacted by their trading partners and suppliers abroad; and global migration and geopolitical security.”

The court concluded by saying that “an agency simply cannot construct a model that confirms a preordained outcome while ignoring a model that reflects the best science available.” The litigation was put on hold after the election, leaving it unclear how the Ninth Circuit court of appeals would have ruled. The district court’s analysis, however, clearly delineates the gap between the Trump approach and that of mainstream economists—showing once again how little the administration really cared about CBA. Nor, apparently, did other Republican political figures. So far as I’m aware, not a single Republican member of Congress complained about the administration’s shoddy economic analysis of regulation after regulation.

Given shrinking support on both ends of the political spectrum, cost-benefit analysis is unlikely to serve as the litmus test for regulations in the future. That will be disappointing to some CBA advocates. They may need to recalibrate their goals. Livermore and Revesz themselves use the term “guardrail” in describing the role of cost-benefit analysis. George W. Bush and Barack Obama had very different approaches to regulation. If, as Livermore and Revesz maintain, CBA accommodates both approaches, there’s obviously considerable maneuvering room between the guardrails. The choice seems to be whether to acknowledge the limited role of CBA as a guardrail or abandon it for other alternatives.

One alternative would involve eliminating the analyses but keeping the information that goes into them. For example, in preparing an analysis of a new regulation under the Clean Air Act, EPA assembles and assesses the scientific information relating to the risk posed by a pollutant. This may involve both the use of existing scientific studies and of agency models to determine how the pollutant would spread. EPA then models how a new regulation would affect pollution levels and the resulting risks. Those effects provide a basis for estimating the benefit of the regulation in reducing hospitalization and mortality. On the other side, regulators attempt to determine how industry would comply and to estimate the costs of compliance. This discussion contains information that we’d really like to know regardless of whether we’re interested in a monetized CBA.

In the absence of a cost-benefit analysis, key agency findings could be displayed on a standardized online dashboard. The dashboard would provide information such as estimates of the severity of the risk being regulated; projections of compliance costs; quantifiable benefits of the regulation; impact of the regulation on social inequality; unquantifiable benefits; impact on jobs, etc. This is all information that the public as well as the ultimate decisionmakers should take into account.
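
As a purely illustrative sketch of what one dashboard entry might hold (the field names here are hypothetical, not an actual or proposed reporting standard), the point is that quantified and unquantifiable items sit side by side instead of being collapsed into a single net-benefit number.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical structure for a single dashboard entry. Field names are
# illustrative only; they are not drawn from any agency's actual format.
@dataclass
class DashboardEntry:
    rule_name: str
    risk_severity: str                        # qualitative estimate of the regulated risk
    compliance_costs: Tuple[float, float]     # (low, high) projected annual cost, dollars
    quantified_benefits: Tuple[float, float]  # (low, high) monetizable annual benefits, dollars
    unquantified_benefits: List[str] = field(default_factory=list)
    equity_impacts: Optional[str] = None      # effect on social inequality
    employment_impacts: Optional[str] = None  # effect on jobs

# Hypothetical example entry.
entry = DashboardEntry(
    rule_name="Illustrative air toxics standard",
    risk_severity="Elevated cancer and respiratory risk near affected facilities",
    compliance_costs=(200e6, 400e6),
    quantified_benefits=(150e6, 900e6),
    unquantified_benefits=["reduced neurological harm in children", "ecosystem effects"],
    equity_impacts="Largest exposure reductions in overburdened communities",
    employment_impacts="Small net effect; some shift toward pollution-control jobs",
)
print(entry.rule_name, entry.quantified_benefits)
```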

The appeal of the dashboard approach is that it doesn’t try to squeeze a complicated policy decision into a rigid monetized calculation. It also offers the opportunity to provide information about the uncertainties surrounding many policy decisions. For instance, scientists may not really have much confidence in any precise quantitative estimate of the risk posed by a single chemical, let alone the potential harm of climate change. The dashboard would avoid the need to come up with a specific number and allow fuller communication of the range of reasonable estimates.

What would be lost with this approach is a standardized metric for comparing costs and benefits. Monetization may seem artificial and reductionist, but it does provide useful guidance. Even for those who don’t believe that you can put a dollar value on risks to human lives, it may be useful to provide benchmarks for the tradeoffs accompanying a regulation. A cost-benefit analysis essentially does that by comparing regulatory tradeoffs with the tradeoffs that people make in terms of their personal risks in different occupations.

Estimates of the social cost of carbon provide an illustration of the pitfalls and utility of quantification. There are many uncertainties involved in calculating the social cost of carbon. Some involve the climate models themselves, others involve projections about how well society will adapt to climate change and about the economic cost of any remaining impacts. There is also considerable dispute about how to compare impacts decades or more in the future with costs incurred today.
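
One of those disputes, how to weigh far-off impacts against present costs, can be made concrete with a simple present-value sketch. The damage figure and discount rates below are assumptions chosen for illustration, not values from any agency analysis.

```python
# How the choice of discount rate changes the weight given today to a
# climate damage occurring decades from now. All numbers are illustrative.

future_damage = 1_000_000   # damage occurring 50 years from now ($)
years = 50

for rate in (0.02, 0.03, 0.07):
    present_value = future_damage / (1 + rate) ** years
    print(f"Discount rate {rate:.0%}: present value = ${present_value:,.0f}")

# At a 2 percent rate the damage still counts for roughly $372,000 today;
# at 7 percent it counts for only about $34,000.
```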

Yet the estimate of the social cost of carbon does play a valuable role. It provides a guidepost about what costs are justifiable and a mechanism for ensuring some consistency across the many different regulatory arenas where climate change impacts are important. It also provides a gauge of whether market-based mechanisms are setting a reasonable price on carbon emissions. Something important would be lost if we substitute a dashboard of qualitative information for this quantified estimate.

Another advantage of cost-benefit analysis lies in the fact that it involves a standardized methodology backed up by the professional norms of economists. This standardization makes CBA useful for decisionmakers needing a metric to compare actions across different agencies or across presidential administrations. It also provides some constraints on the ability of different presidential administrations to swing regulatory policy in opposing directions.

The guardrail approach seems consistent with what the Supreme Court has had to say about cost-benefit analysis. While CBA is founded on presidential mandates, the courts have also had something to say about it. In Michigan v. EPA, decided in 2015, the issue was whether EPA needed to consider costs in deciding whether to regulate toxic emissions from power plants. The statutory language was quite open-ended and said nothing directly bearing on the question. The Court split five to four, but there was a surprising amount of agreement on one fundamental issue: At some point in the regulatory process, EPA would need to consider how the costs of regulation compared with its benefits. Justice Scalia’s majority opinion said that “no regulation is ‘appropriate’ if it does significantly more harm than good.” Justice Kagan’s dissent agreed that “(absent contrary indication from Congress) an agency must take costs into account in some manner before imposing significant regulatory burdens.” The dissent also agreed (quoting a previous Scalia opinion) that ignoring costs would create a risk that the agency would “impose massive costs far in excess of any benefit.” In that earlier case, Entergy Corp. v. Riverkeeper, Inc., the issue was whether EPA had erred in considering the costs as well as the benefits of requiring power plants to use closed-system cooling in order to minimize impacts on waterbodies. Justice Scalia had said it was a “reasonable and hence legitimate exercise of its discretion to weigh benefits against costs,” for EPA to consider whether costs were “significantly greater than” benefits. While less than a full-throated endorsement of CBA, the Court clearly seemed favorably inclined to using it in some form as a way to ensure against unreasonable regulatory tradeoffs.

Indeed, there is a case to be made for retaining cost-benefit analysis as part of the decisionmaking process. It is a pipedream, however, to imagine that CBA will ever be able to boil down all the information relevant to policymakers into a single definitive number. The monetized results will never be able to include everything relevant to social policy and will always incorporate judgment calls about uncertain parameters.

Revesz and Livermore refer to cost-benefit analysis as providing guardrails. It seems to me that this is much more defensible than the stronger claim that CBA equates with good social policy. It provides much more room for political disagreement about what values government should pursue. It is also a claim that comes closer, at least in my view, to what CBA can actually deliver, given the data gaps, modeling difficulties, and unquantifiable benefits that are an inevitable part of the enterprise. Rather than being a litmus test, CBA should function as a way of giving policymakers the best available information about the positive and negative effects of a regulation.

As we have seen, political support for use of cost-benefit analysis as the yardstick for regulators has dwindled. The theoretical and legal arguments for that kind of reliance on cost-benefit analysis are also shaky. Nevertheless, CBA does retain some advantages: as a discipline to ensure full articulation of regulatory impacts; as a rough metric for comparisons across agencies and administrations; and as a guidepost for regulatory reasonableness. Forty years of history show that while this is surely a more modest claim than some advocates of cost-benefit analysis have made, it is also a more defensible one. TEF

COVER STORY Costs, benefits, risk, and equity are but a few of the inputs into a “dashboard approach,” allowing more intelligent rulemaking and avoiding regulatory clunkers

The Agency’s Proposed “Science Transparency” Rule Is Opaque
Author
David P. Clarke - Writer and Editor
Writer and Editor
Current Issue
Issue
6

When Acting Administrator Andrew Wheeler gave his first speech to EPA staff on July 11, he stated that clear, consistent risk communication — which he accurately described as an activity “that goes to the heart of EPA’s mission” — was his top priority. That being the case, Wheeler would do well to carefully review the many skeptical comments from risk experts and the scientific community on the agency’s proposal for “strengthening transparency in regulatory science,” a reality check that would reasonably lead him to either radically revamp or abandon the flimsy proposal.

Although the scientific community broadly understands the need for transparency and the ability to replicate data that support risk conclusions, for many commenters, the bottom line is that the proposal isn’t transparent enough.

That’s true not only for the proposal’s many critics but even for the American Chemistry Council, the national chemical industry trade group, which supports the proposal. According to ACC, “key regulatory definitions and regulatory text” in the proposal aren’t clear enough, the preamble needs clarification, and the proposal doesn’t always properly identify the sources of statutory authority it cites, among other issues.

For the Society of Toxicology, whose work is vital to the chemical risk assessments that Wheeler and EPA would communicate, the proposed rule “is too simplistic.” Among other issues, SOT says, an independent scientific body — not the EPA administrator — should decide whether studies whose data are not publicly available would be valid and valuable for a regulatory decision. Rejecting the proposal’s notion that data should be invalidated solely on the basis of public availability, SOT cautions that excluding studies conducted before electronic storage “would invalidate hundreds of thousands of studies” that are extremely important for chemical risk assessments for tens of thousands of chemicals.

Comments from Harvard Law School signed by almost 100 experts from top hospitals and public health organizations note that EPA’s own guidelines require the best available science in all risk assessments and warn that the proposal would “cripple EPA’s ability” to implement major environmental laws by excluding “for no rational reason” many valid studies. And, citing a litany of complex risk- and science-related issues raised by the proposal, the presidents of the National Academies of Sciences, Engineering, and Medicine write: “Much more clarity is required.”

Echoing numerous other comments, the Defense Department notes that it is “improbable” EPA will always obtain a study’s underlying data, but that shouldn’t prevent the use of “otherwise high-quality studies.”

William Farland, who served in senior scientific advisory roles during his 27-year career at the agency, notes that since the 1990s critics have alleged that EPA uses “secret science” to support regulations, allegations that have motivated the transparency rule. But, despite a long history of such claims, none have withstood scrutiny.

EPA needs to provide more information on how decisions will be made about studies that can and can’t be used, Farland says, citing the National Academies’ call for an objective, independent scientific review process to evaluate individual studies. In a process going well beyond the administrator’s simply giving an exemption, EPA could systematically review research, using published review criteria, to determine if studies should be used. The rule speaks to the critical issue of when data can be used to support dose-response analysis, but “there needs to be more detail” on the specific uses of such studies before EPA’s risk community could implement the proposal, Farland adds.

Wheeler’s commitment to better risk communication notwithstanding, communication “will continue to be an issue,” Farland says. Agency scientists conduct detailed risk analyses, but regulatory programs say “just give me the number,” an issue EPA has struggled with over the years. Prior to deciding on how to regulate coal-plant mercury, Administrator Mike Leavitt — who served from 2003–05 — at times spent six hours a week learning from agency scientists about the toxin’s risk, a process that enabled good risk communication. Likewise, Wheeler needs to delve deeply into the science supporting regulatory decisions if he wants to accomplish good risk communication, Farland says.

Going forward, Wheeler will have to decide how he’ll respond to an EPA Science Advisory Board request to review the rule. In a June 28 letter, the board commented that the rule’s design “appears to have been developed without a public process” for soliciting the scientific community’s input, although the proposal entails numerous important science issues.

Ultimately, Wheeler can’t separate risk communication from risk science, but for a start he can clarify the agency’s proposed solutions to what may be a non-problem.

The Agency’s proposed “science transparency” rule is opaque.

Reconstruct an Administrative Agency
Author
Joseph Goffman
Current Issue
Issue
6
Reconstruct an Administrative Agency

“BE GRATEFUL TO EVERYONE.” Buddhists in Tibet say this to remind themselves that adversity offers a path to enlightenment. In that spirit, this is an overdue thank you note to former administrator Scott Pruitt, for reminding us what EPA is for. His efforts to roll back a host of air, water, and waste rules have forced us to recognize the extent to which those regulations reduce pollution and protect the environment.

First, a measurement of his tenure’s impact. David Cutler and Francesca Dominici, two public health experts at Harvard University, recently published a column in the Journal of the American Medical Association that treats the impact on human lives and health as the critical metric for the stakes of the Pruitt EPA’s deregulatory agenda. They pegged the number of Americans facing premature death over the next decade at an additional 80,000, thanks to his regulatory rollbacks. And that may be a conservative estimate.

It gets worse. Beyond highly publicized changes to pollution standards, Pruitt took on a less visible, if more destructive, project. Like King Arthur confronting the Black Knight in Monty Python and the Holy Grail, Pruitt has lopped off the agency’s critical limbs, disabling its capacities and diminishing its public health agenda. He attacked the very mechanisms EPA relies on to create and enforce pollution rules. But unlike the Black Knight, the professionals who work at the agency and their pollution-fighting colleagues in industry, NGOs, law firms, and state governments are not ignorant of the agency’s diminished abilities.

EPA is an agency built to fill a variety of roles, all rooted in scientific, analytic, and technical expertise. That expertise is intended to benefit the public by informing rulemakings and guidelines for states and businesses to follow to reduce harmful pollution. With some exceptions, it has also been used to ensure compliance with those rules and to enforce them. EPA has carried all this out under a mandate to keep the trust of a well-informed public and to be accountable to that public.

In his short tenure at the agency, Pruitt moved against almost all of these critical functions, from the way the agency evaluates and applies science, to how it collects information, fosters compliance, and pursues enforcement. Even the way EPA assesses the public health benefits of reducing pollution fell into Pruitt’s destructive path. Oblivious to the agency’s protective mission, he seized upon former Trump advisor Steve Bannon’s call to “deconstruct the administrative state” and made it his literal-minded mission to curtail, if not shut down, his branch of the targeted organism.

After Pruitt resigned in July, Deputy Administrator Andrew Wheeler became the acting administrator and the frontrunner for the permanent job. Wheeler continues to follow the same deregulatory agenda outlined by the president and enacted by Pruitt, but with a different leadership style. The divergence between Wheeler and Pruitt may be visible in the extent to which the new chief seems to follow administrative procedure and other standard processes, in contrast with the often corner-cutting work Pruitt’s EPA produced. Gone, too, thanks to Pruitt’s departure and Wheeler’s apparent probity, are the innumerable scandals that swirled around the former’s personal conduct and spending. Wheeler’s seasoning, thanks to time spent as an EPA career lawyer and as a senior staffer on the Senate Environment and Public Works Committee, seems to count for something.

The Environmental Protection Agency wasn’t created by a comprehensive, organic statute laying out its mission and functions; the 1970 reorganization plan that formed EPA focused simply on rehousing under one roof a variety of research, monitoring, standard-setting, and enforcement activities previously spread across the federal government. Rather, it is in the substantive statutes — the Clean Air Act, Clean Water Act, Resource Conservation and Recovery Act, Toxic Substances Control Act, and Superfund, to name the most important — that we can find the agency’s purpose. It is through these laws that Congress asserted its role and built the agency by assigning it specific tasks. Over time EPA acquired or expanded the functions needed to perform those tasks. Pruitt took aim not just at these protective rules but also at the usually dovetailed capacities to promulgate them and to carry them out.

In this article, we will take as an example the Clean Air Act to shed light on the functions the agency has needed to master to do its job — and how Pruitt has sought to hamper EPA’s smooth operation. Enacted in its modern version in 1970 and extensively amended in 1977 and 1990, the CAA offers a good look at the de facto blueprint Congress followed in building EPA. Congress made the law’s overriding purposes clear: to enhance air quality for the sake of public health, welfare, and productivity; to promote research and development in service of pollution control; and to provide financial assistance to states and localities in support of anti-pollution programs. Where Congress did its concrete agency-construction, though, was in charging EPA with numerous building-block tasks. These include setting ambient air quality standards to protect human health, determining how best to achieve emissions reductions, establishing technology-based standards for industry, setting tailpipe pollution standards, performing risk and technology reviews for toxic air pollutants, equitably allocating pollution-control responsibilities among local sources in polluted areas along with upwind sources, and considering cost and available technology for many of these jobs.

The standards and requirements that these tasks produce are directions to states and businesses to take the actions needed to reduce pollution. Public health benefits depend, in turn, on governments and firms complying with those directions and achieving reductions. Congress assigned EPA the task of ensuring, via compliance assurance or enforcement, that they do so.

The task list thus demands an agency that possesses expertise in relevant sciences — notably public health, epidemiology, and biomechanics, along with atmospheric chemistry and physics — as well as engineering, technology, and economics. The job list also demands competency in detection, monitoring, and information-gathering in support of EPA’s obligation to ensure compliance with pollution limits.

The authors of the CAA were not done with tasks, however. Grasping the progressive nature of science and technology, and the dynamics of a market-based economy, Congress committed the agency to continual, open-ended improvement, requiring EPA to ensure that progress is reflected in the level of protection delivered to the public. Thus, the statute mandates that the agency review health-based standards every five years and technology-based standards every eight years — and change them if new information compels a change. Congress considered this ongoing cycle of tasks so vital that it authorized any member of the public to sue EPA for failing to meet these deadlines, and authorized the courts to order the agency to meet a schedule for completing the reviews and rulemakings in each case.

This last task is accountability, which falls as much to the public and the courts as to EPA. The accountability ethic Congress established supplements the formal accountability created by the Administrative Procedure Act and sections 307(b) and (d) of the Clean Air Act.

The Environmental Protection Agency has historically relied on the best available peer-reviewed science in carrying out its mission. With his “Strengthening Transparency in Regulatory Science” proposal, Pruitt sought to restrict the science EPA will consider. The proposal effectively excludes two gold-standard public health studies, by the American Cancer Society and Harvard University, that show the health threats and increased mortality from particulate pollution, which kills or harms more Americans than any other form of pollution.

The proposal made by Pruitt in April would bar the agency from considering scientific studies unless the raw data were made publicly available. The proposal offered a barely coherent explanation for why the data-availability requirement is needed for studies that had already undergone peer review and the other quality-assurance processes of state-of-the-art science. What is clear, however, is that the proposal attacks the foundational ACS and Harvard studies, because both rely on a large body of confidential patient data that legally cannot be made public.

These thoroughly reanalyzed and replicated studies have long been relied on by the world’s leading researchers and by EPA alike. The result of this policy will be to hamstring the National Ambient Air Quality Standards program that drives air pollution controls, as well as the agency’s mandatory work in evaluating the costs and benefits of reducing pollution. It’s easy to understate benefits in those calculations, especially if the benefits of reduced mortality and illness, as assessed through confidential surveys, are excluded.

Because science is central to so many of the agency’s tasks, EPA has long since absorbed the fundamental principles of scientific inquiry. None is more vital than that of following the data and analysis to where they lead rather than leading the data and analysis to a predetermined destination. In advancing a “science” proposal so clearly designed to deliver a preferred result — excluding studies that support the case for regulating particle pollution — Pruitt committed what remains a cardinal sin outside the administration: corrupting the scientific method and subordinating it to a preordained agenda.

The science advisory panels EPA has relied on, through both Democratic and Republican administrations, have been objective, independent, highly qualified, disinterested, and rarely, if ever, legitimately questioned. But Pruitt has purged the panels, ushering out independent academic experts and replacing them with scientists affiliated with the very firms under regulation. By one count, during Pruitt’s tenure the proportion of leading academics on the main Science Advisory Board fell from 79 percent to 50 percent and the proportion of industry-employed scientists rose from 6 percent to 23 percent.

In an October 2017 directive, Pruitt decreed that no one will be allowed to serve as an advisor who has received a grant from the agency — a condition that mostly affects academic experts who routinely receive government funding for research. Never mind that at least one federal appeals court has already found that “working for or receiving a grant from [an agency], or coauthoring a paper with a person affiliated with the department, does not impair a scientist’s ability to provide technical, scientific peer review of a study sponsored by . . . one of its agencies.” Meanwhile, Pruitt’s novel theory of “independence” features no such exclusion for experts working for industry even if their firm is regulated by the agency. Nor does it offer any explanation of what it is about being affiliated with a corporation that demonstrates a scientist’s independence.

Now the SAB and other reconstituted science panels will guide EPA on important decisions like health-based ambient air quality standards, determinations of acceptable risk levels for exposure to toxic chemicals, assessments of the net carbon impact of burning biomass, studies of the impact of hydraulic fracturing on drinking water, and how to value human life for purposes of economic analysis.

If anything in the CAA is sacrosanct, it’s the requirement that EPA set NAAQS exclusively on the basis of science. Both the statutory language of Section 7409(b) and a unanimous Supreme Court decision exclude other considerations, even those of cost and feasibility. Once new standards are set, importantly, the action-based provisions of the act are set in motion to reduce air pollution, and those provisions put cost and technical feasibility front and center. As the Court found, NAAQS are based on answering only one question: what air quality does the science tell us is safe for human health?

But in May, Pruitt issued a memo that threatens to undermine the integrity of the standard-setting process. Until then, the various steps EPA followed to propose and then, after public comment, issue final NAAQS were carefully phased. The phasing ensured that the NAAQS-setting process focused exclusively on the science of human health and was insulated from other considerations. The Pruitt process collapses those steps, so that the Clean Air Scientific Advisory Committee and the agency itself will be compelled to review science, cost, technology, and implementation together in a single step rather than separately. That is plainly contrary to what the Court has held is the legislative purpose of the NAAQS process.

Another essential principle EPA must follow in carrying out its tasks is accurately assessing the public health benefits of pollution reduction. Since at least October 2017, the agency has engaged in a coordinated series of attacks on how the benefits of pollution reduction are defined and quantified. For Pruitt, and now Wheeler, denying health benefits and changing how they are weighed in cost-benefit analyses helps clear the path to deregulation and inaction. For an agency dedicated to carrying out the tasks assigned to it under the CAA, embracing an “all-seeing” ethic is essential. If EPA applies analytic tools that blind it to the benefits of reducing harmful pollutants, then it need not take further action to cut pollution.

The Affordable Clean Energy proposal and the Clean Power Plan repeal proposal include Regulatory Impact Analyses that count only the domestic (not global) benefits of reducing carbon dioxide emissions. In the model runs that account for particulate-reduction benefits, benefit-cost analysis shows the repeal to be unjustified. One repeal RIA run, however, zeroes out the value of the CPP’s particulate pollution-reduction benefits wherever they would have occurred in areas already meeting ambient air quality standards. It is the only run in which benefit-cost analysis justifies the repeal.

The premise of the analysis was that reducing pollution below the levels required by the NAAQS had no beneficial effect and thus no value, even though major studies — designed to discern the realities of public health — contradict the zero-benefit premise. Most recently, for example, “Association of Short-term Exposure to Air Pollution With Mortality in Older Adults” in the Journal of the American Medical Association shows — as did the Harvard study now besieged by the “secret science” proposal — that fine particle pollution, even in concentrations below the current NAAQS, drives up mortality across the country. But if the answer to every question has been decided in advance as “no new regulation,” then adopting analytic devices that hide these results, and the reality they reveal, becomes essential. Again, this negative example reinforces the fact that public health protection depends on EPA’s commitment to rigorous and open-ended scientific inquiry.

Fortunately, Congress did not leave the public entirely defenseless in the face of an untrustworthy EPA. Through notice-and-comment rulemaking, the right to petition the agency for reconsideration, and the right to petition the courts for review of rules, citizens have a fairly robust set of tools with which to hold the agency accountable for meeting its obligations. Congress reinforced the seriousness of EPA’s requirement to review health and technology standards by giving the public the right to enforce the obligation in federal court if the agency misses a deadline. Historically, EPA has embraced and facilitated this accountability by working with litigants to resolve such lawsuits through settlements that establish mutually agreed-upon schedules and acknowledge the complaining party’s statutory right to collect attorney’s fees.

In an October 2017 directive, however, Pruitt added a set of new obstacles to the public’s effort to exercise that right. Under the directive, citizen litigants hoping to reach agreement with EPA on deadlines will face both a set of new procedural hurdles and a playing field tilted in favor of regulated businesses. Contrary to past practice, they will be at much greater risk of having to foot their own legal bills, even if they ultimately succeed in reaching settlement with the agency.

Pruitt’s rationale for the directive was so lacking in foundation that more than fifty retired career EPA attorneys issued an extensive public rebuttal of its assertions, noting that the directive makes inflammatory and evidence-free allegations about “collusion” between government attorneys and litigants and ignores a recent Government Accountability Office report that found no basis for those claims.

One of the hurdles to settlement the directive introduces also creates an entirely new advantage for industry by requiring business sign-off before the agency agrees to a settlement. While the rationale Pruitt offered for the directive was hazy, the intended effect is crystal clear: to make it harder for the public to hold the agency accountable by making it that much more unlikely that actions to enforce EPA deadlines will be resolved by settlement, rather than costly litigation.

To deliver on its purpose of protecting the public from pollution, the Clean Air Act requires the agency to ensure that polluters reduce emissions and discharges. To do that, EPA can engage directly with sources to offer assistance and, if that fails, bring enforcement actions. Pruitt cut back on one of the pillars of these useful activities. The agency’s ability to collect reliable and timely information from polluters both assures compliance and enables enforcement actions when needed. While most firms are committed to staying in compliance, that commitment is strengthened if they can count on EPA to gather the information needed to ensure that their competitors are also in compliance, leveling the playing field.

EPA’s nationwide network of 10 regional offices and their subsidiary offices has historically been the cornerstone of information gathering. The regional offices are authorized to request information as part of their frontline responsibility for identifying environmental noncompliance, including environmental crimes associated with waste dumping and illegal levels of pollution, and for enforcing the environmental laws.

But Pruitt required regional personnel to clear each information request through headquarters. The immediate effect of this directive is delay and inefficiency, since requests must now pass through an additional layer of review by headquarters employees who work hundreds or even thousands of miles from the sites in question and know less about the facts underlying the requests than their regional counterparts do.

Longer term, requiring centralized review of information requests leaves the process open to political influence, from which EPA’s compliance and enforcement activities have been rigorously shielded in the past. The policy has already led to fewer requests for information and slower enforcement actions, and the Pruitt EPA has collected fewer civil penalties from rule-violating polluters than previous administrations did.

Another example of Pruitt’s indifference, at best, to enforcement is his treatment of the CAA’s New Source Review program, which has played a crucial role in protecting local airsheds when firms expand their operations or build new facilities. Robust enforcement has been instrumental to the program’s success. Until recently, EPA worked to ensure that polluters estimate potential emissions increases accurately, since those estimates are the first step in applying the NSR program’s pollution-control tools. Businesses have an incentive to underestimate the emissions impacts of new projects in order to reduce the amount of control equipment they will need to install. EPA has countered that incentive by scrutinizing those estimates and enforcing against inaccurate emissions projections. Courts have repeatedly upheld EPA’s right to scrutinize industry estimates of air pollution increases.

In December 2017, however, Pruitt adopted what amounts to a non-enforcement policy: the agency now will accept firms’ estimates and will not scrutinize the accuracy of emissions projections or the performance of new projects. This policy surrenders to industry a position that EPA itself had secured in a recent Sixth Circuit case upholding the agency’s authority to double-check emissions estimates.

As Congress gradually assigned the agency duties and responsibilities through the statutes it is required to implement, EPA took shape as an agency that must incorporate science to comply with its statutory obligations. It also grew to rely on its regional offices, both to act in accordance with the cooperative federalism structure set forth in our environmental statutes and to gather the information it needs to assure, and when necessary enforce, compliance. The Trump EPA, first under Scott Pruitt and now under Acting Administrator Andrew Wheeler, constrains its own capacities to act in areas crucial to its mission and intended functions. In some areas, it even shifts its focus from process to predetermined results — anathema to any expert agency subject to the Administrative Procedure Act’s requirements as interpreted by the courts.

Wheeler has shown early signs of changing course from Pruitt’s way of doing things. Wheeler, for example, withdrew the No Action Assurance letter for the annual manufacturing cap on high-polluting “glider trucks,” which Pruitt had issued on his last day as an act of defiance against a remaining Obama-era regulation. Although the withdrawal came only after the D.C. Circuit had taken the unusual step of staying the No Action Assurance — an indication of how inappropriate the NAA was in the first place — it suggests that Wheeler is willing to observe proper boundaries. This refinement in technique sets up an intriguing plot line going forward. On the one hand, a more faithfully followed rulemaking process is likely to compel Wheeler to account for data and analysis inimical to rolling back existing protections and to remaining inactive in the face of new understandings of environmental threats. On the other is the administration’s unswerving commitment to across-the-board deregulation.

Case in point: during the summer, the New York Times reported that Wheeler raised questions in internal deliberations about safety data the National Highway Traffic Safety Administration used to support the proposed rollback of Corporate Average Fuel Economy standards, slowing the issuance of the proposal. On the one hand, NHTSA and EPA did issue the proposal, along with rollbacks of tailpipe carbon dioxide emissions standards. On the other, Wheeler’s intervention may have nudged the proposal toward a more honest accounting of the issues.

Faithfully followed, the rulemaking process is a stern taskmaster that demands intellectual honesty. Is it too much to hope that Wheeler will remain true to those dictates, even if they point him away from implacable deregulation? Wheeler-specific optimism aside, Pruitt’s deconstruction agenda will remain one of his legacies. Despite his intent, however, Pruitt created a reverse blueprint for rebuilding EPA’s dismantled capacities. His handiwork has also reminded us just how essential a properly functioning EPA is to public health and environmental protection. TEF

COVER STORY ❧ Examining the structures that Scott Pruitt dismantled during his tenure at EPA brings the agency’s mission into focus and reveals a blueprint for rebuilding its functions — if successor Andrew Wheeler likes the shape intended by the pollution statutes’ drafters.