Risk & Progress | A hub for essays that explore risk, human progress, and your potential. My mission is to educate, inspire, and invest in concepts that promote a better future for all. Subscriptions and new essays are free and always will be. Paid subscribers gain access to the full archives.
The FDA, it could be argued, is responsible for tens of thousands of deaths every year. Such a statement may sound outlandish or hyperbolic, but the case can be made. In the latter half of the 20th century, the relentless expansion of regulation and bureaucracy was justified by specious claims of reducing safety and environmental risks. However, some level of risk is necessary for progress to occur, and progress itself reduces long-term risk to humanity. In attempting to build a perfectly safe world of soft pillows, we suffocate progress in the process.
Deadly Risk Aversion
The FDA was originally created to prevent unscrupulous food and drug producers from hawking products that were unsafe, if not downright deadly. The overarching goal was to prevent harm to the public. But as these things tend to go, its power gradually expanded until it became, arguably, the very thing it was intended to prevent.
In 1906, Congress passed the Food and Drug Act, creating the precursor to the modern FDA. Importantly, at that time, the new agency had no power to review or approve new drugs before they went to market. It merely had police powers to enforce the law after a violation had taken place, and the law was primarily concerned with truth in advertising and labeling.
This changed in the late 1930s when a company called S. E. Massengill began marketing one of the first antimicrobial sulfa drugs as “Elixir Sulfanilamide.” In its liquid form, the drug was prepared in a solution of diethylene glycol, a toxic chemical akin to antifreeze. More than 100 people died, not from the drug itself, but from the preparation solution.
In response, Congress passed the 1938 Food, Drug, and Cosmetic Act, which catapulted the FDA into a regulator with the power to review and approve drugs before they could be sold. Crucially, however, the law gave the FDA just 60 days to vet a drug. Should the deadline pass without a ruling, the drug would automatically be approved. Additionally, the law required the FDA to verify only the safety, not the efficacy, of pharmaceuticals.
Then, in the late 1950s, came another disaster, this time from Europe: Thalidomide. The drug, marketed as a sleep and morning sickness aid, was deemed safe by regulators in Europe, despite never having been tested on pregnant women. It turned out to be highly toxic to unborn babies, leading to tens of thousands of birth defects and deaths.
Thalidomide had not been approved in the US, but the disaster compelled Congress to increase the power of the FDA further. In 1962, Congress passed the Kefauver–Harris Amendment, which required the FDA to verify that drugs were both safe and effective. It also lengthened the drug review period to 180 days and eliminated the automatic approval provision.
If you think about it, this was an odd response. Thalidomide’s issue was safety, not efficacy. The FDA had prudently blocked the drug on the grounds of insufficient safety data. Why exactly was Congress compelled to take additional action? Politics, no doubt. This illustrates the problem with bureaucracies: they slowly, and often irrationally, expand the scope of their authority over time.
FDA reviews have only grown more difficult and lengthy since then, with delays that suppress and stifle many life-saving drugs and medical devices, killing tens of thousands of patients and/or subjecting them to needless suffering. The FDA approval process has become so arduous that many drug formulations and devices are never funded at all.
Indeed, the FDA is one culprit behind “Eroom’s Law”, which states that the cost of developing new drugs doubles every nine years on average. It is also one factor explaining why drugs and healthcare are so much more expensive in the US than in other countries of the world.
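Eroom’s Law is just exponential growth with a nine-year doubling period, and a few lines of arithmetic show how quickly that compounds. This is a minimal sketch of the trend, not an FDA dataset; the baseline cost and time spans below are hypothetical placeholders chosen purely for illustration.

```python
def eroom_cost(base_cost: float, years: float, doubling_period: float = 9.0) -> float:
    """Projected development cost after `years`, assuming cost doubles
    every `doubling_period` years (Eroom's Law uses roughly nine)."""
    return base_cost * 2 ** (years / doubling_period)

# After one doubling period, cost has exactly doubled.
print(round(eroom_cost(1.0, 9), 2))   # 2.0

# Over 50 years, the same trend implies roughly a 47-fold increase:
# a hypothetical $100M program would project to nearly $4.7B.
print(round(eroom_cost(100.0, 50)))
```

Note that this is the inverse of Moore’s Law: the same doubling arithmetic, but applied to costs rising rather than falling, which is why small regulatory frictions compound into enormous totals over decades.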
Other Pathways to Safety
But don’t we need this arduous process to verify drug safety and efficacy? Not necessarily. Foreign regulators routinely approve drugs far more quickly than the FDA does, and even in the US, FDA-approved drugs are often prescribed by doctors for uses for which they were never tested. This so-called “off-label” usage is entirely legal and common for people with severe illnesses.
Other industries function quite well without comparable regulatory authority. In electronics, for example, manufacturers can voluntarily submit their products to Underwriters Laboratories and obtain its “UL” mark. This is not a legal requirement, but many retailers require the certification as a condition of sale.
In addition, the tort system and the insurers who must underwrite product risks naturally hold companies to account. When someone is harmed by a drug or medical device, the company’s reputation is tarnished, it is dragged through court, and it often has to pay significant damages to victims.
The combination of private certification, courts, insurers, and doctors ensures that products are largely safe. Of course, there will always be some level of risk. Even with today’s expansive FDA approval process, thousands die every year from FDA-approved products. While tragic, focusing only on these deaths misses the other side of the equation.
Economists Sam Peltzman and Dale Gieringer argue that as the FDA approval process became more involved over time, far more people needlessly died and/or suffered because the drugs or medical devices they needed to live were either too expensive or unavailable when they needed them.
For example, by 1988 it was well known that taking aspirin reduces the risk of myocardial infarction, or heart attack. But for many years, the FDA prohibited aspirin manufacturers from advertising this fact. How many people died as a consequence? This is risk aversion gone too far.
The Spread of Risk Aversion
Risk aversion has spread upstream to research itself. Institutional review boards (IRBs) are tasked with reviewing proposed research methods to ensure the safety of participants. IRBs worked well until 1998 when, during an asthma study, a patient unfortunately died. Again, overreaction ensued.
The consequence is that IRBs now exemplify risk aversion gone haywire. In one study that sought to test the transfer of bacteria on the skin, the IRB consent form warned participants of AIDS risk, even though one cannot possibly contract AIDS through the skin. It also warned of a risk of contracting smallpox, a disease that has been eradicated since the late 1970s.
While a single death of a participant is truly tragic, the cost-benefit calculation of overprotection is more tragic still. Tightening oversight of research may save one life, but overburdensome risk aversion makes life-saving research immensely more difficult and costly, indirectly causing the deaths of thousands more.
The same phenomenon has played out with NEPA, the National Environmental Policy Act. The purpose of NEPA was to protect the environment by giving a voice to environmentalists. NEPA requires federal agencies to assess the environmental impacts of their actions and gives environmentalists the power to sue to hold agencies accountable.
The law requires federal agencies to produce a “detailed statement” of the environmental effects of any “major action.” In the 1970s, those assessments typically ran under ten pages in length. But because NEPA created an avenue for lawsuits, each new lawsuit set a precedent for the next, expanding the scope of NEPA far beyond its original intention.
Today, NEPA reviews typically run hundreds of pages. The mean preparation time for an Environmental Impact Statement was a staggering 4.8 years in 2020. What began as an effort to curtail risks to the environment has evolved into a tool that actually blocks and delays projects that reduce human impact on the environment, such as wind farms, solar power stations, and clean natural gas facilities, along with their connections to the grid. NEPA now protects the status quo of polluting coal and oil energy infrastructure by making it very difficult to build anything else.
It is quite clear that the Precautionary Principle, amplified by our natural human fear of loss, has enabled risk aversion to routinely prevail over progress in recent decades. This, of course, does not mean that we ought to abolish all regulators and regulations. Some regulations are beneficial and useful. But a brighter 21st century is going to require addressing the challenge of bureaucratic bloat and risk aversion more generally.
This means making it easier to roll back regulations that are “non-functioning” or counterproductive. Perhaps, as I suggested here, we could create an independent panel to review all existing regulations, with the power to modify or delete those that are non-functioning. Whatever approach is preferred, a proper balance between risk and progress must urgently be found.
I basically agree with your thrust.
Conceptually, however, the issue is the way risk aversion is arbitrarily scattered around. It makes perfect sense to me for regulators to value a human life or QALY higher now than in 1900 or 1940 or even 1970. What does NOT make sense is to value a life lost to a nuclear power plant meltdown more than a life lost to coal-fired power plant emissions, or a life lost to the side effect of a vaccine more than a life lost to the delay in approving that vaccine while the first risk was being assessed.
Another important distinction among regulations is between those that are performance-based and those that are process-based.
The Clean Air Act and Clean Water Act are performance-based. There is an evidence-based process at the EPA to set acceptable pollution levels using the best available science. This involves direct, open debates about health impacts and cost-benefit tradeoffs. Then, from those standards, permitting systems are developed to reduce pollution below the limits.
Process-based regulations like NEPA are completely different. They have no objective, agency-defined standards. They are supposed to be public disclosure documents, but decades of lawsuits have made them ever larger.
For the FDA, how much of the growth in approval expenses is due to tighter standards, and how much to more elaborate processes? I see the IRB craziness as a process step gone awry.