Introduction
American government, at every level, regulates a dizzyingly broad swath of social and economic life. Regulatory policy determines the drugs we can buy, the pollutants in the air we breathe and the water we drink, the speed we can drive, the materials builders use to construct our homes, the cars we buy, and so much more.
In making decisions about regulations, public officials must choose which areas of our lives merit government rules, as well as how stringent those rules should be. For example, the federal government first decided to regulate airborne particulate matter in 1970 and has tightened these regulations twice since then. Simultaneously, the government has had to decide whether and how to regulate hundreds of other air pollutants and other hazards. These choices have been further complicated by the fact that the impacts of some pollutants fall unevenly across the population (e.g., they may differ by region, income, or race). At the same time, policymakers have had to grapple with the economic impacts of proposed environmental rules on manufacturers and other polluters. The essence of regulation is that it requires the regulated to take actions that they would not otherwise take, actions that often increase their costs, reduce their utility, or in some other way harm them.
When faced with this array of complex and often uncertain trade-offs, what is a well-intentioned government to do?
The only humane solution to this enduring dilemma lies in careful and rigorous cost-benefit analysis.