Risk & Progress | A hub for essays that explore risk, human progress, and your potential. My mission is to educate, inspire, and invest in concepts that promote a better future for all.
We previously explored some potential solutions to Fermi’s Paradox, the paradox that asks why the universe isn’t teeming with alien life. Among the many possible solutions, I included the possibility that other advanced civilizations rose and fell before us, wiped out by their own artificially intelligent creations. Their AIs, absent the telltale chemical signatures of life, would likely be invisible to us in our search for extraterrestrial life. Regardless of whether this theory is correct, our search for alien intelligence may be about to come to an end…because we will find it here on Earth first. In the 2020s, for the first time, humanity must confront the prospect of facing a “species” that is more intelligent than itself. How we handle this transition will make or break the 21st century.
Gradually, then Suddenly
I have purposely steered clear of discussing AI for a long time, chief among my reasons being that I simply grew tired of hearing about it; there is too much hype and fear surrounding the topic. Nonetheless, as a peril to continued human prosperity, I would be remiss to neglect it. As of this writing (2024), AI tools have become popular for their fun generative capabilities, like creating the image you see above, short songs, and soon even short videos. It won’t be this way for very long. Moore’s Law, the ultimate driver behind decades of improvement in computing, has been ongoing since the 1960s. The emergence of AGI (Artificial General Intelligence), machines roughly comparable to human intelligence, and ASI (Artificial Super-Intelligence), machines that vastly exceed human capabilities, could come about in short order.
The rapid rise of our AI “overlords” could mirror the old Hemingway line, “gradually, then suddenly.” To understand why, recall the ancient legend of chess and the emperor. The story goes that the inventor of the game of chess so impressed the Emperor of India that the Emperor asked him to name his reward. To the Emperor’s surprise, the inventor asked only for rice…and a seemingly modest sum at that. He asked for one grain on the first square of the chessboard, two on the second, four on the third, eight on the fourth…and so on. The Emperor agreed, unaware that exponential growth, doubling on every square, was a deceptively big ask. Indeed, by the end of the 64 squares of the chessboard, the grain the inventor demanded far exceeded what the kingdom could produce…for centuries to come. Had the Emperor kept his word, he would effectively have transferred all power and wealth to the inventor, including the crown itself.
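The legend’s arithmetic is easy to verify. A minimal sketch (the ~25 mg grain mass is my own rough assumption, not a figure from the story):

```python
# The chess legend, checked: one grain on the first square, doubling on
# each of the chessboard's 64 squares.
grains_per_square = [2**k for k in range(64)]
total = sum(grains_per_square)  # equivalently, 2**64 - 1

print(f"Last square alone: {grains_per_square[-1]:,} grains")
print(f"All 64 squares:    {total:,} grains")

# Assuming ~25 mg (25e-6 kg) per grain of rice -- a rough figure --
# the total comes to roughly 460 billion metric tons.
mass_tonnes = total * 25e-6 / 1000  # kg per grain, then kg -> metric tons
print(f"Approximate mass:  {mass_tonnes:,.0f} metric tons")
```

Annual world rice production today is on the order of half a billion tons, so the inventor’s demand really does amount to centuries of global harvests, just as the story claims.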
We are not immune to the Emperor’s error; our brains do not readily comprehend exponential growth. The reason is simple: while the growth rate remains constant, the real-world impact of that growth becomes more consequential the closer one gets to the “end” of the chessboard. Progress, it seems, has a way of “sneaking up” on us. This doesn’t negate the importance of those early squares, but they pale in comparison to what happens later. The same holds for Moore’s Law. Even though the rate of improvement in our computer chips remains roughly constant (or is perhaps even slowing a bit), the impact of those improvements is increasingly consequential. The vast leaps in computer technology that most of us remember will be child’s play compared to what comes next.
The ASCI Red was the fastest computer in the world when it was built in 1996, costing some $55 million and occupying about 1,600 square feet of floor space. The machine reached 1.8 teraflops of processing power. Just nine years later, the average consumer could buy the equivalent computing power for about $500, in a package the size of a shoebox. That product was the Sony PlayStation 3. Today, the PlayStation 3 itself is “ancient” technology compared to what is available on the market. In 1997, the Deep Blue chess computer beat world chess champion Garry Kasparov in what was a historic achievement for a machine. Just ten years later, however, no human could beat even a mid-tier computer chess program. Computers have been improving at a rate that is difficult for our brains to comprehend, let alone for our societies to prepare for.
There is no reason to believe that progress in computing is going to stop here. Our machines are getting very good at writing stories, making music, recognizing human faces, and creating human-like voices. They are getting so good, in fact, that many believe AGI is just around the corner, perhaps 2–3 years away. As these things go, the timeline of what comes next is anyone’s guess. An AGI, matching the intelligence and capability of a human, could presumably automate its own AI research, advancing and improving other AIs far faster than a human could. With no need to eat, sleep, or rest, these AIs would be able to read every machine-learning text ever written, learn in parallel with one another, and write millions of lines of code, perhaps 10 or 100 times faster than a human, doing years of work in mere days. AIs are also easily replicable; imagine fleets of millions of GPUs, scaling progress up by orders of magnitude further, doing the work of tens of millions of AI researchers.
This is not outside the realm of possibility; the groundwork for this “technological singularity” is being laid right now in an AI arms race. It is also at this point that we may give birth to ASI, and it could happen astonishingly fast. We could go from celebrating the first AGIs to witnessing the first ASI in perhaps as little as a year. It is not clear to me that humanity is remotely prepared for the implications this naturally engenders. Humans, even with billions of years of natural evolution and hundreds of thousands of years of cultural evolution behind us, would be quickly left behind. In less than a generation, we could fall from being the most intelligent species on Earth to a very distant second place, falling further behind with each passing moment. We would fall from masters of our universe to mere “bugs” in theirs. And once we have created a “digital god,” there is no telling what it may, or may not, do.
Bottlenecks
Granted, the timeline I have presented is by no means guaranteed. There are many potential bottlenecks to AGI and ASI, some known, but more likely unknown, that could prove to be stumbling blocks. Computational limitations, for example, could constrain the ability of machines to advance. It may prove more difficult than expected to replace AI researchers with machines. There is also the possibility, as we seem to be discovering across all scientific realms, that advancement is becoming harder due to diminishing returns on capital, energy, and manpower. Or, more precisely, that new ideas are simply getting harder to find as the low-hanging fruit is picked. Nonetheless, none of these bottlenecks appears poised to significantly alter the course of AI advancement. At most, they could collectively delay the arrival of ASI by a few years.
It is difficult for us to imagine what this means. With machines endowed with all of humanity’s knowledge, able to think and reason orders of magnitude faster than we can, and possessing a nearly unlimited ability to replicate and improve themselves, AI will be able to compress progress that took us centuries into a few short years. This could lead to an era of unimaginable abundance and prosperity. We would no longer need to worry about solving problems on our own; we would simply direct superintelligent AIs at them. Cancer? Solved in minutes. Climate change? Fusion energy? “Cracked” in days. Some believe that ASI could even solve aging, making us quasi-immortal beings…almost gods ourselves. On the other hand, these AIs could decide that they do not need us. Why would they? Once they begin thinking for us, it’s no longer our civilization anyway; it’s theirs.
Perhaps it was somewhat foolish to turn our telescopes to the skies in search of intelligent life. We appear poised to create that life here on Mother Earth first. But unlike us, this new “species” would not be born of carbon and the trial and error of natural selection, but instead intelligently designed in a cradle of silicon. The question is, will we remain the “inventor” in this game? Or, like the Emperor, will our creations outsmart us such that they may sit on our throne instead?