We can make the case that the FDA, the government agency tasked with ensuring the safety of food and drugs, is responsible for tens of thousands of “invisible” deaths every year. In the latter half of the 20th Century, the relentless expansion of regulation and bureaucracy was justified by specious claims of curtailing safety and environmental risks. However, progress, as we have seen, necessarily entails some level of risk in the short term while reducing overall risk in the long term. Our attempt to build a perfectly safe world, a world made of pillows, threatens to suffocate us all beneath it.
A Negativity Bias
It is human nature that what comes suddenly is more likely to grab our attention than what arises gradually. Indeed, almost by definition, good news unfolds gradually, while bad news arrives suddenly. This ingrained biological propensity creates a strong negativity bias in the media. When a country suddenly descends into civil unrest, the tragedy and destruction make headlines. But the moment peace is restored and the gradual process of rebuilding begins, that same country drops off the radar as if nothing further happened there.
The media’s bias toward negativity distorts our worldview. In 2016, Dutch researchers asked 26,492 participants how global poverty had changed over the prior 20 years; only 1 percent of respondents correctly answered that poverty had plunged by some 50 percent. These results were astounding because the answers were multiple choice; respondents had a 20 percent chance of picking the correct answer at random. Most could not imagine that the world had actually gotten better for so many people because the headlines fed to them by the media covered only hunger and layoffs, not harvests and hiring.
To be fair, we cannot solely blame the media. In an experiment conducted in Canada, participants were told to read a newspaper article while they “waited for the experiment to begin.” Researchers then watched which articles the unknowing test subjects chose to read. Consistently, they sought out negative stories rather than positive or neutral ones. This cognitive bias is likely rooted in our natural tendency toward loss aversion. Our brains have evolved to react more strongly to negative than to positive stimuli. Studies have consistently shown that winning $10, for example, produces far less emotion than losing $10, even though the monetary impact is the same. Dr. Barbara Fredrickson, a psychology and neuroscience professor at the University of North Carolina at Chapel Hill, explained, “Negativity bias evolved because it helped our human ancestors avoid threats to life, limb, and social reputation.”
Deadly Risk Aversion
While risk aversion may have benefited our ancestors, in modern complex societies our cognitive biases can paralyze social institutions. This institutionalized caution has become known as the “precautionary principle,” which can be summarized as “better safe than sorry.” With increasing enthusiasm, policymakers are willing to take regulatory action against new ideas even when the potential risks are unknown or speculative. As Cass R. Sunstein has illustrated, policies flowing from the “precautionary principle” are often logically inconsistent; they prohibit everything from consideration because all progress engenders some level of risk. Furthermore, they favor existing risks over new ones, even when the new risks would be a net benefit; stasis is preferred over change. Indeed, the precautions we take to mitigate risk can be deadly themselves.
The poster child of the precautionary principle is atomic power. Once promised to make energy cheap and largely pollution-free, events like the Three Mile Island accident in the US (where no one died) led risk-averse institutions to regulate the industry to an early death. In the wake of the Fukushima Daiichi nuclear accident, for instance, the Japanese government rapidly curtailed the country’s use of atomic power. Researchers found that the government’s (over)reaction to the disaster led to an estimated 1,280 unnecessary deaths attributable to higher energy costs in the cold weather that followed. They summarized, “This suggests that ceasing nuclear energy production has contributed to more deaths than the accident itself.” This analysis did not include the negative impact of switching from nuclear power back to climate-harming fossil fuels like coal, the fact that coal plants release more radioactive material into the environment than their atomic counterparts, or the reality that coal mining is itself a dangerous job that has historically taken many lives.
The FDA was originally created to prevent unscrupulous food and drug producers from hawking products that were blatantly unsafe, if not downright deadly. The overarching goal was to prevent harm to the public. But as these things tend to go, its power gradually expanded until it became, arguably, the very thing it was intended to prevent. In 1906, Congress passed the Pure Food and Drug Act, creating the precursor to the modern FDA. Importantly, at that time, the new agency had no power to review or approve new drugs before they went to market. It merely had police powers to enforce the law after a violation had taken place, and the law was primarily concerned with truth in advertising and labeling.
This changed in the late 1930s, when a company called S. E. Massengill began marketing one of the first antibiotics, known as “Elixir Sulfanilamide.” In its liquid form, the drug was prepared in a solution of diethylene glycol, a toxic chemical akin to antifreeze. Many died, not from the drug itself, but from the preparation solution. In response, Congress passed the 1938 Food, Drug, and Cosmetic Act, which catapulted the FDA into a regulator with the power to review and approve drugs before they could be sold. Crucially, however, the law gave the FDA just 60 days to vet new drugs. If the FDA could not complete its review in 60 days, the drug would automatically be approved. Additionally, the law required the FDA to verify only the safety, not the efficacy, of pharmaceuticals.
Then, in the late 1950s, came a disaster on another continent: thalidomide. The drug, marketed as a sleep aid and morning sickness remedy, was deemed safe by regulators in Europe despite never having been tested on pregnant women. It turned out to be highly toxic to unborn babies, leading to tens of thousands of birth defects and deaths. Thalidomide had not been approved in the US, but the disaster compelled Congress to expand the power of the FDA further. In 1962, Congress passed the Kefauver–Harris Amendment, which required the FDA to verify that drugs were both safe and effective; it also lengthened the drug review period to 180 days and eliminated the automatic approval provision.
When you stop and think about it, this was an odd response. Thalidomide’s issue was safety, not efficacy. The FDA had prudently blocked the drug on the grounds of insufficient safety data, so why exactly was Congress compelled to take additional action? Politics, no doubt. This illustrates the problem with bureaucracies: they slowly, and often irrationally, expand the scope of their authority over time. Since then, FDA reviews have only grown more difficult and lengthy, with delays that now suppress and stifle many life-saving drugs and medical devices, killing tens of thousands of patients and subjecting many more to needless suffering. The FDA approval process has become so arduous that many drug formulations and devices are never funded at all.
Indeed, the FDA is one culprit behind “Eroom’s Law,” which observes that the cost of developing a new drug doubles roughly every nine years. It is also one factor explaining why drugs and healthcare are so much more expensive in the US than in other countries. But don’t we need this arduous process to verify drug safety and efficacy? Not necessarily. Foreign regulators routinely approve drugs far more quickly than the FDA does, and even in the US, FDA-approved drugs are often prescribed by doctors for uses they were never tested for. This so-called “off-label” use is entirely legal and common for people with severe illnesses.
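To make that compounding concrete, here is a minimal back-of-the-envelope sketch, assuming only the nine-year doubling period cited above and an illustrative, purely hypothetical 1970 baseline cost of $100 million:

```python
# Back-of-the-envelope illustration of Eroom's Law: the cost of developing
# a new drug roughly doubles every nine years. The 1970 baseline of
# $100 million is a hypothetical figure chosen only for illustration.
DOUBLING_PERIOD_YEARS = 9
BASELINE_YEAR, BASELINE_COST_M = 1970, 100  # hypothetical starting point

for year in range(1970, 2021, 10):
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    cost_m = BASELINE_COST_M * 2 ** doublings
    print(f"{year}: ~${cost_m:,.0f} million per approved drug")
```

Under those assumptions, the cost grows more than forty-fold over fifty years; whatever the true baseline, the doubling dynamic is what makes the trend so punishing.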
Other industries function quite well without comparable regulatory authority. In electronics, for example, manufacturers can voluntarily submit their products to Underwriters’ Laboratories and obtain its “UL” mark. This is not a legal requirement, but many retailers demand the certification before they will stock a product. In addition, the tort system, and the insurers who must underwrite product risks, naturally hold companies to account. When someone is harmed by a drug or medical device, the company’s reputation is tarnished, it is dragged through court, and it often must pay significant damages to victims. The combination of private certification, courts, insurers, and doctors ensures that products are largely safe. Of course, there will always be some level of risk.
Even with today’s expansive FDA approval process, thousands die every year from FDA-approved products. While this is tragic, it misses the other side of the equation. As the FDA approval process became more involved over time, economists Sam Peltzman and Dale Gieringer argue, far more people have needlessly died or suffered because the drugs or medical devices they needed to live were either too expensive or not available when they needed them. For example, by 1988 it was well known that taking aspirin reduces the risk of myocardial infarction (heart attack). Yet for many years, the FDA prohibited aspirin manufacturers from advertising this fact. How many people died as a consequence? This is risk aversion gone too far.
The Spread of Risk Aversion
Risk aversion has spread upstream to research itself. Institutional review boards (IRBs), for example, are tasked with reviewing proposed research methods to ensure the safety of participants. IRBs worked well until 1998, when a patient died during an asthma study. Again, an overreaction ensued, and IRBs now exemplify risk aversion gone haywire. In one study that sought to test the transfer of bacteria on the skin, the IRB consent form warned participants of AIDS risk, even though one cannot possibly contract AIDS through the skin. It also warned of a risk of contracting smallpox, a disease that was eradicated in the 1970s. The death of a single participant is truly tragic, but overprotection is more so: tightening oversight may save one life, yet it also makes life-saving research immensely more difficult and costly, indirectly causing the deaths of many more.
The same phenomenon has played out with NEPA, the National Environmental Policy Act. The purpose of NEPA was to protect the environment by giving a voice to environmentalists. NEPA requires federal agencies to assess the environmental impacts of their actions and gives environmentalists the power to sue to hold agencies accountable. The law requires federal agencies to produce a “detailed statement” of the environmental effects of any “major action.” In the 1970s, those assessments typically ran under ten pages in length. However, because NEPA created an avenue for lawsuits, each new lawsuit set a precedent for the next, expanding the scope of NEPA far beyond its original intention.
Today, NEPA reviews typically run hundreds of pages. The mean preparation time for an Environmental Impact Statement was a staggering 4.8 years in 2020. What began as an effort to curtail risks to the environment has evolved into a tool that blocks and delays the very projects that would reduce human impact on the environment, like wind farms, solar power stations, and cleaner-burning natural gas facilities. NEPA now protects the status quo, the polluting coal and oil energy infrastructure, by making it more difficult to build anything else.
The relentless expansion of regulation is broad. The Code of Federal Regulations, the repository where all federal rules are codified, grew from roughly 70,000 pages in 1975 to 170,000 pages in 2010. And these are only federal regulations; each of the fifty states piles on its own unique rules. Each new rule imparts an accumulating cost to innovation and progress. One study in the Journal of Economic Growth estimated that since 1949, the growing regulatory thicket in the United States has slowed economic growth by some 2 percent per year through 2005. Remember that growth is cumulative; had regulation stayed frozen at its 1949 level, the US economy would have been roughly three times larger in 2005.
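The arithmetic behind that threefold figure is simple compounding; a quick sketch, assuming the 2 percent annual drag holds constant over the 1949–2005 period the study covers:

```python
# Rough compounding check of the claim above: a 2% annual drag on growth,
# sustained from 1949 through 2005, compounds to roughly a threefold
# difference in the size of the economy.
annual_drag = 0.02
years = 2005 - 1949          # 56 years of compounding
multiplier = (1 + annual_drag) ** years
print(f"(1.02)^{years} = {multiplier:.2f}")  # ~3.0x larger counterfactual economy
```

Small annual losses look trivial in any single year; it is the compounding over decades that turns a 2 percent drag into two-thirds of the economy forgone.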
Occupational Licensing
Regulation has also spread into occupations themselves. According to The Hamilton Project, in 1950 just 5 percent of jobs in America required a license; today, that figure is as high as 30 percent. This is due, in part, to shifts in the American labor force toward high-skilled services and away from manufacturing. It is also due, however, to overzealous state regulators and lobbying by professional groups under the specious guise of “safety and health.”
Studies examining the effects of occupational licensing have found no discernible positive outcomes for public health and safety. For example, Kleiner and Kudrle found no correlation between the difficulty of passing a dental licensing exam and the quality of dentistry. Similarly, more stringent licensing of mortgage brokers does not result in fewer foreclosures. Licensing requirements often have little connection to the occupation they are supposed to regulate. For example, 16 states require a cosmetology license to perform hair braiding, yet the main component of a cosmetology license is the proper use of chemicals, a skill set that is not needed for braiding hair.
Many licensing requirements are illogical, unscientific, and unverifiable. In Michigan, it takes 1,460 days to become a licensed athletic trainer, but only 26 to become an emergency medical technician. In some jurisdictions, even fortune-tellers require a license. How would one even judge the quality of a fortune-teller for licensure? Licensing has also spread beyond occupations that might pose a genuine risk to health and safety. There is no reason that interior designers, auctioneers, travel guides, or scrap metal recyclers need a license to work, yet many states require one anyway.
The narrative that licensing is necessary to protect public health and safety is, for the most part, false. If you still don’t believe it, consider this: licensing rules usually grandfather existing practitioners into the new regime. Practitioners established before the rules took effect often never have to meet their own industry’s standards; the rules apply only to newcomers, not to themselves. Reading between the lines, licensing is more often than not a rent-seeking mechanism used to restrict new competition. It’s not for you, it’s for them.
When competition is restricted, prices are forced upward. One study found no difference in quality between floral arrangements made by florists in Louisiana (who must be licensed) and those made in Texas (where they need not be). Yet consumers paid more for floral arrangements in Louisiana. Licensing also restricts economic mobility, and the brunt of that burden lands on the poor, who have greater difficulty jumping through the legal hoops required to obtain a licensed job. The poor frequently lack the time and money to pay licensing fees and take courses, forcing them to rely instead on lower-paying jobs to meet short-term needs. Licensure, in effect, helps trap many in a continuing cycle of poverty. Indeed, estimates suggest that occupational licensing results in 2.8 million fewer jobs in America and costs consumers some $203 billion annually.
In short, occupational licensing is often an unnecessary barrier to social mobility and human potential. It is just another outgrowth of the relentless expansion of the regulatory industrial complex that has metastasized in our world. Our natural fear of loss, our built-in risk aversion, has allowed precaution to prevail over progress. We’ve tried to make the world safe by lining it with pillows, but now we’re suffocating beneath them. This, of course, does not mean that we ought to abolish all regulators and regulations or act with wanton disregard for safety. Some regulation is beneficial. The future requires a proper balance. A brighter 21st Century demands that we address bureaucratic bloat and risk aversion, balancing short-term risk against long-term prosperity.
Another important distinction is between regulations that are performance-based and those that are process-based.
The Clean Air Act and Clean Water Act are performance-based. The EPA has an evidence-based process for setting acceptable pollution levels grounded in the best available science, involving direct, open debate about health impacts and costs versus benefits. From those standards, permitting systems are then developed to reduce pollution below the limits.
Process-based regulations like NEPA are completely different. They have no objective, agency-defined standards. They are supposed to be public disclosure documents, but decades of lawsuits have made them ever larger.
For the FDA, how much of the growth in approval costs comes from tighter standards, and how much from more elaborate processes? I see the IRB craziness as a process step gone awry.
I basically agree with your thrust.
Conceptually, however, the issue is the way risk aversion is arbitrarily scattered around. It makes perfect sense to me for regulators to value a human life or QALY more highly now than in 1900, 1940, or even 1970. What does NOT make sense is to value a life lost to a nuclear power plant meltdown more than a life lost to coal-fired power plant emissions, or a life lost to the side effect of a vaccine more than a life lost to the delay in approving that vaccine while the first loss was being assessed.