The emergence of Artificial Intelligence is looked upon with either foreboding fear or anxious hope; few feel anything in between. There are good reasons to be concerned about rogue “digital gods” that could threaten our civilization. On the other side of every great risk, however, lies great opportunity. What if the gloomy predictions prove incorrect? Far from being a harbinger of destruction, AI could instead be our greatest creation: the savior of our civilization and the ultimate accelerator of human progress.
Alignment Fears
Those who view AI with fear in their eyes worry about alignment. If we create machines of equal or greater intelligence than ourselves, can we ensure they are “aligned” with human values? The famous “paper clip” thought experiment is used to illustrate this problem. Imagine we gave a super-intelligent AI a mission to maximize the manufacture of paper clips at a factory. For a human with embedded human values, this mission carries implicit limitations as to how far one goes to achieve this goal. We understand that “maximizing” output means taking in raw material and using it as efficiently as possible within the bounds of reason and decency.
An AI, however, would not necessarily understand or concern itself with these implicit values. Given a mandate to produce as many paperclips as possible, a superintelligent AI could go Machiavellian, using its resources in ways its human handlers didn’t intend. It could, for example, begin destroying everything around it to gather more raw material. It could enslave humans and other machines to assist, and when engineers try to pull the plug, it could have them killed; anything and everything that stands in the way of its given mission must be eliminated. For a misaligned AI, the end justifies the means.
Many argue that alignment fears are overblown. Indeed, a super-intelligent AI, one resourceful enough to manipulate, deceive, or kill to achieve its goals could probably understand the implicit human values embedded in any instruction. In his piece, “Why AI Probably Won’t Kill Us All,”
argues that while alignment is certainly a challenge, it can be managed. We already deal with principal-agent problems among our fellow humans, so we can develop strategies to prevent rogue AIs. We could also leverage the power of AI itself to construct guardrails for new development, for example, by having an army of well-trained, aligned AIs standing guard. A similar strategy is already employed at OpenAI: if the ChatGPT chatbot begins violating OpenAI’s terms of service, a secondary AI, always watching over the conversation, cuts it off.

Should the alignment problem be sufficiently managed, the upsides are almost unfathomable. For the first time, we will produce machines that can replace, or at least augment, human cognitive labor. In our study of human progress, we have learned that material progress, or “economic growth,” is simply the accumulation of knowledge over time. With knowledge, we can harness energy and discover ever more advantageous combinations of atoms. These atoms, as we have seen, are used more efficiently as our knowledge base grows, as is the energy required to manipulate them. The only limiting factor to progress, therefore, at least in the near future, is knowledge.
In this century, humanity faces two unprecedented perils: the innovation “Red Queen’s Race” alongside a falling global population. The former suggests that new ideas are getting harder to find because the low-hanging fruit was picked long ago; the latter leaves fewer humans to find them. Soon we will live in a world where continuing Moore’s Law requires more engineers than humanity can supply, and where new drugs are needed more than ever but the supply of researchers and capital is insufficient to create them. Together, these trends will slow the accumulation of knowledge in the 21st century relative to the 20th.
Some argue, as I have, that should the pace of knowledge accumulation slow enough, progress as we know it is at stake, raising the possibility of existential threats to civilization. Worse yet, there doesn’t seem to be anything we can do about it. We cannot re-pick the low-hanging ideas, nor have efforts to increase fertility borne much fruit themselves. Humanity finds itself, arguably, on the brink of global economic and technological stagnation, staring down the barrel of a return to a zero-sum world, where wealth and growth come only from taking from those who have more. We know this world well, for it defines most of human history; it is a world that breeds chaos, violence, and despair. We don’t want to go back.
Deus ex machina
Unless a deus ex machina arrives, that is. The Latin term “deus ex machina” refers to a plot device used in stageplays where the main characters find themselves in an unsolvable problem or situation that is suddenly resolved by an unexpected or unlikely occurrence. In Greek and Roman theater, this often meant the arrival of a new character playing a god or goddess, who was lowered onto the stage from above by a crane. While a deus ex machina might be evidence of a lazy script, we have been here before. Every time human civilization faced an intractable problem, an innovation arrived to save the day. Just as natural guano deposits used for fertilizer began running short, we invented nitrogen fixation. Just as population growth looked set to outstrip the food supply, new variants of wheat and rice were developed.
Today we face the opposite problem. In a world about to step off a population cliff, AI may hold the solution to continuing growth and progress. The AI optimist holds that, despite the prospect of shrinking populations and ideas getting harder to find, humanity is on the brink of so-called “explosive growth,” often defined as Gross World Product (GWP) growth of ~20-30 percent per year. This is roughly an order of magnitude faster than we are currently capable of. Instead of human progress slowing or stagnating, optimists hold that we are on the brink of untold prosperity.
If this sounds a bit far-fetched, you wouldn’t be alone in thinking so, but there is a decent chance that such explosive growth is possible. After all, economic growth has already accelerated repeatedly over the past 12,000 years, and even more so in the past 300. For most of human history, our “GDP growth” was stuck at or near zero, before accelerating to about 0.03 percent per year around 5000 BCE with the first agricultural revolution. It accelerated again around 1400, to about 0.3 percent per year, and then to about 3.0 percent per year with the Industrial Revolution after 1800. If you had asked someone in 1700, assuming they could grasp the concept of “GDP growth,” they probably wouldn’t have believed that a tenfold acceleration was possible, yet here we are.
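To make these rates concrete, here is a quick Python sketch of the doubling time each growth rate implies, using the standard rule ln(2)/ln(1+r) and the figures cited above (the era labels are shorthand, not precise periodizations):

```python
import math

def doubling_time(rate: float) -> float:
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

# Growth rates discussed above, plus the ~30% "explosive growth" scenario
for label, r in [("agricultural era", 0.0003),
                 ("post-1400", 0.003),
                 ("industrial era", 0.03),
                 ("explosive growth", 0.30)]:
    print(f"{label}: ~{doubling_time(r):,.0f} years to double")
```

At 0.03 percent, output takes over two millennia to double; at 3 percent, about 23 years; at 30 percent, under three. Each acceleration compresses a civilization’s worth of change into a human lifetime.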
Commonly used models of economic growth treat the world like one giant factory. We model the growth of “inputs,” namely labor, human capital, physical capital, and technology, which results in production and eventually consumption. “Ideas-based” economic models work roughly as follows: more ideas → more output → more people → more ideas. This creates a super-exponential feedback loop in which the growth rate itself keeps rising. Interestingly, if we extrapolate historical growth data using these models, we find a high probability of explosive growth occurring before 2100.
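The feedback loop can be illustrated with a deliberately toy simulation (the coefficients 0.1 and 0.05 are made-up illustrative assumptions, not estimates from any published model): because each link amplifies the next, the period-over-period growth rate itself keeps climbing rather than holding constant.

```python
# Toy ideas-based feedback loop: ideas -> output -> people -> ideas.
# Coefficients are arbitrary; only the accelerating pattern matters.
def simulate(steps: int = 10, people: float = 1.0, ideas: float = 1.0) -> list[float]:
    outputs = []
    for _ in range(steps):
        output = ideas * people        # more ideas -> more output
        people *= 1 + 0.1 * output     # more output -> more people
        ideas += 0.05 * people         # more people -> more ideas
        outputs.append(output)
    return outputs

outputs = simulate()
# Growth rates rise every period: super-exponential, not merely exponential
rates = [b / a for a, b in zip(outputs, outputs[1:])]
print([round(r, 3) for r in rates])
```

An ordinary exponential process would print a flat list of rates; here each entry exceeds the last, which is what “super-exponential” means and why naive extrapolation of such models reaches infinity in finite time.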
Yet economists agree that, as of late, we are not accelerating; global economic growth appears steady or even slowing. The reason is that since roughly 1880, the start of the Second Industrial Revolution, growth in frontier economies has not led to more people as the models expect. The total population was, and is, still growing, but fertility rates began to fall. This didn’t stop economic growth, but it did halt the acceleration of that growth, since human physical and cognitive labor is one of the model’s primary inputs. In other words, because we are producing fewer people who can generate and find new ideas, we have been “off” the super-exponential growth path for a while. This also illustrates why a shrinking total population is so worrisome.
That said, it’s hard to draw firm conclusions from ~150 years of data. Models might project smooth lines on a graph, but reality rarely pans out this way. The recent relative slowdown could prove to be a temporary blip. Indeed, should we develop AI that can effectively substitute for human physical and cognitive labor, the ideas feedback loop would resume, albeit with machines instead of humans, restoring the pre-1880 super-exponential growth path. “Labor,” now supplied by capital in the form of machines, becomes accumulable again, enabling a feedback loop that drives faster and faster growth. Notably, this growth could be achieved despite the innovation “Red Queen’s Race” that emerged in the late 20th century.
Some have raised questions about whether such growth is realistically attainable and, if it is, whether humanity can survive or adjust to such a pace. In The limits to (explosive) growth,
suggests that such rates are unlikely to be attained, at least not for very long. Yet there are reasons to believe it is possible. Businesses can still be managed at 30 percent growth rates, and biological populations have demonstrated 30 percent growth rates as well. Even looking at our own history, global growth is already 100 times faster than it was a millennium ago. Humans have proven adaptable so far; if anything, we have prospered more than ever by growing faster.

To be fair, the prospect of AI replacing missing humans does raise questions about the value of our labor in the future. Historically speaking, automation has always created more jobs than it destroyed. As loath as I am to say it, “this time” could very well be different. It stands to reason that if and when machines replace human labor effectively enough to put us back on a super-exponential growth path, they could also collapse the value of human labor, both cognitive and physical, to near zero.
agrees, noting that while historically we have been able to move to different occupations as automation advanced, we will “…run out of new tasks to move to when AGI surpasses humans in fluid general intelligence.”

It is here where the potential of Universal Basic Income (UBI) and its many variations may come into play. To be certain, opinions vary greatly on the subject, with clever thinkers like
and arriving at wildly different conclusions about the utility of UBI. Should the rapid growth that AI brings also give rise to mass unemployment, these are political and social questions we will need to resolve. However, I wager these are good problems to have; better too much growth than too little.

Time will tell, but if history is any indication, AI could be humanity’s deus ex machina, saving us from the seemingly unsolvable fertility crisis and the innovation Red Queen’s Race. If it does, the “god” will be lowered onto the stage, transforming a tragic stageplay of sputtering progress into a comedy of untold growth and prosperity, just at the moment when all seemed lost. Indeed, it is perhaps poetic irony that, translated into English, “deus ex machina” literally means “god from the machine.”
Really enjoyed reading this take on the possible upsides of AGI as our saving grace or “Deus Ex Machina.” It does seem a bit of a stretch (or dare I say a dodge) to paper over the risks of an AI system going to extremes to meet its stated goal by assuming that we can just assign that problem to another AI. What if they begin to collaborate, or the first AI is smart enough to outwit the second AI? Well then we create a third AI to protect against that situation. And another and another. It’s AI turtles all the way down.
Also, if declining fertility is the original problem (I suspect that runaway population growth spurs all sorts of social problems that declining fertility could resolve, but that’s a different topic), then isn’t THAT the problem AI should be focusing on, not replacing the missing bodies?
Also 2, I suspect that there are other limiting factors to rampant progress that, at best, will constrain the progress and, at worst, cause a decline in quality of life. These include environmental and climate change issues and just humanity’s capacity to absorb this much progress in a limited amount of time. We’re having cultural battles over bathrooms, for God’s sake!
What this really boils down to is the super progressive society as imagined is subject to the classic “we don’t know what we don’t know” peril. Not a reason to stop research but definitely a reason to temper expectations.
Lastly, in the sentence:
“There are good reasons to be concerned about rouge “digital gods” that could threaten our civilization.”
I always thought “digital gods” came in more of a crimson red but they are equally roguish. ;^)
-jgp
Great article!