Author and futurist Arthur C. Clarke once remarked, “Any sufficiently advanced technology is indistinguishable from magic.” The First and Second Industrial Revolutions birthed machines that augmented our bodies, allowing humans to produce and build beyond our physical limits. What followed after 1960, according to Erik Brynjolfsson and Andrew McAfee, was “The Second Machine Age.” The Second Machine Age was different; it augmented our minds, not our bodies. The advancement of computers and microelectronics enabled the creation of devices that, if one didn’t know better, would border on sorcery. After 1960, we became wizards in our own time.
From Computer to Calculator
It seems strange to say this today, but in the 1950s a “computer” was an occupation, not a machine. Before the advent of advanced electronics, a “computer” was a person tasked with performing complex mathematical calculations by hand. Within a few decades, the occupation had vanished entirely, and its name had been transferred to the electronic machines that could perform the same calculations thousands of times faster. Early electronic computers were large machines that filled entire rooms and consisted of thousands of vacuum tubes.
The first commercial general-purpose computer was the UNIVAC I (1951), inspired by the famous WWII-era computer ENIAC. The UNIVAC I used over 6,000 vacuum tubes, weighed 7.6 tons, and took over 30 minutes to power down. The machine required multiple operators and sold for over $1 million in 1950s dollars. Just as UNIVACs were being installed across the country, a groundbreaking innovation was already stirring that promised to make computers smaller and more affordable: the transistor. The semiconductor transistor could perform the same function as a vacuum tube but was smaller, lighter, faster, and cheaper; it consumed less energy and generated less heat. By the mid-1950s, the first “transistorized” computers, like the IBM 608, were already on the market.
The “transistor computer” was better in every way than its tube-driven ancestors, but it was still expensive and still filled an entire room. As it turned out, engineers were already working on a solution: the integrated circuit, or IC. The problem with transistors and vacuum tubes was that they had to be soldered onto a computer’s circuit boards one at a time, a labor-intensive process that kept computers expensive and large. Engineers realized, however, that they could use light to “etch” many transistors onto a single “chip” of silicon all at once.
This innovation had two effects: it allowed transistors to be made smaller, and it allowed many of them to be “printed” at once, driving down the cost per transistor. By the late 1950s, the key technical challenges of the IC had been overcome, setting the stage for it to storm the marketplace in the 1960s. The IC increased the capability of computers while shrinking their size and cost, making the first “minicomputers” possible. These machines, like the PDP-8, appeared in the mid-1960s, were about the size of a filing cabinet, and cost “just” tens of thousands of dollars.
Just as minicomputers hit the market, it became evident that something special was happening in the computing sector. In 1965, Gordon Moore made his famous observation about ICs, noting that the “complexity for minimum component costs has increased at a rate of roughly a factor of two per year.” His observation has often been rephrased as a prediction that computing power at a given price would double annually. It should be noted that Moore’s statement was an observation, not a forecast, though he did expect the trend to continue until 1975. “Moore’s Law,” as it became known, however, kept going, and going, and going.
Beginning in the early 1970s, specialized ICs known as “microprocessors” became central to all new computers, including the first “microcomputers,” which we today call “desktops.” These microcomputers, like the famous Apple II, cost just a few thousand dollars (in 1970s dollars). They were only possible due to advances in microelectronics, which, by the end of the 1970s, made it possible to etch tens of thousands of transistors onto a single chip.
Progress has not stopped since. By the close of the 1980s, over a million transistors could be placed on a single microchip. The smaller and more densely the transistors could be made and packed together, the more capable computers became. Every time it appeared that Moore’s Law was nearing its end, the “social supercomputer” worked around the obstacles and kept it going. By the year 2000, transistor counts per chip reached the tens of millions; by 2010, the billions; and by 2020, tens of billions.
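To see how quickly such doubling compounds, here is a minimal back-of-envelope sketch in Python. The baseline (~2,300 transistors in 1971, roughly the Intel 4004) and the two-year doubling period are illustrative assumptions rather than figures from this essay, but the projections land in the same ballpark as the milestones above.

```python
# Illustrative projection of Moore's Law as simple exponential doubling.
# Baseline and cadence are assumptions for illustration only:
# ~2,300 transistors (roughly the Intel 4004) in 1971, doubling every two years.

BASE_YEAR = 1971
BASE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Project transistors per chip for a given year under the assumed doubling."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (1990, 2000, 2010, 2020):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")
```

Run as written, the sketch lands near a couple of million for 1990, tens of millions for 2000, billions for 2010, and tens of billions for 2020, which is the compounding the decade milestones above describe.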
Microelectronic advancements kept making computers smaller, lighter, faster, and more useful, driving demand for better batteries, screens, and cameras. The technology spilled over into other domains, including sensors and imaging chips that could “see” light. The first modern digital camera was invented at Kodak in 1975. In 1991, the first digital SLR, the Kodak DCS 100, was released. It cost $13,000 in 1991 dollars, had a 1.3-megapixel resolution, and stored images on a 10-lb hard drive/power unit that had to be slung separately over the user’s shoulder. Just 20 years later, a vastly more capable camera, the size of a fingernail, was included in virtually every new smartphone produced.
Cybernetic Organisms
By the 1990s, cathode ray tubes (CRTs) began to give way to LCDs, which were thinner, lighter, and used far less energy. New rechargeable batteries, like nickel-metal hydride (NiMH), could store more energy in a smaller, lighter package. Together, NiMH batteries and LCDs allowed computers to go mobile. In the 2000s, NiMH batteries were quickly supplanted by an even more energy-dense technology: lithium-ion. New solid-state memory chips began to replace the venerable hard drive, storing information in a smaller, lighter package while reading and writing faster and using less energy. Processors became more powerful, while new reduced instruction set (RISC) architectures cut their energy consumption and eliminated the need for cooling fans. Capacitive touchscreens replaced the mouse and keyboard. The smartphone was born.
Almost in a puff of smoke, the smartphone transformed our species into cybernetic wizards. We can lift a smooth shard of black glass from our pockets and summon capabilities that would appear as magic to our ancestors. Upon that dark shard, we can light the darkness, touch invisible buttons, tell the time of day, predict the weather, record still images and video with breathtaking realism, read books, send and receive mail, do math, create music, learn just about anything, or communicate with just about anyone on the planet who has a shard of their own.
Our smartphone is, for most of us anyway, a second brain. In this second brain, we store our contacts, record our memories, plan our days and weeks, and extend our senses and our voices. We have become cybernetic organisms; we have developed a symbiotic relationship with our electronic creations. The primary limitation in this new relationship is not computing power but bandwidth. The data transfer to and from our “second brains” is slow, limited to the speed of our fingers or our speech. Perhaps it is our inevitable destiny to remove this constraint as well and establish a direct connection between our computers and our brains; to merge human and machine.
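To make that bandwidth gap concrete, here is a rough back-of-envelope sketch in Python. The typing and speaking rates (~40 and ~150 words per minute) and the 100 Mbit/s link are illustrative assumptions, not measurements; the point is only the orders-of-magnitude gulf between the speed of our fingers or speech and an ordinary machine-to-machine link.

```python
# Back-of-envelope comparison of human "output bandwidth" with a machine link.
# All figures below are illustrative assumptions, not measurements.

CHARS_PER_WORD = 6   # ~5 letters plus a space
BITS_PER_CHAR = 8    # rough one-byte-per-character estimate

def wpm_to_bits_per_second(words_per_minute: float) -> float:
    """Convert a words-per-minute rate into an approximate bit rate."""
    chars_per_second = words_per_minute * CHARS_PER_WORD / 60
    return chars_per_second * BITS_PER_CHAR

typing_bps = wpm_to_bits_per_second(40)    # assumed casual typing speed
speech_bps = wpm_to_bits_per_second(150)   # assumed conversational speaking speed

print(f"Typing: ~{typing_bps:.0f} bits/s")
print(f"Speech: ~{speech_bps:.0f} bits/s")
print(f"Ordinary Wi-Fi link (illustrative): {100_000_000:,} bits/s")
```

Under these assumptions, typing moves on the order of tens of bits per second and speech roughly a hundred, while even a modest wireless link moves tens of millions; that is the constraint the paragraph above describes.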
I believe the trajectory outlined in your post resonates strongly with where humanity is headed. The ongoing work to enhance the bandwidth between our brains and external devices or a chip in the body/head, as well as future breakthroughs like CRISPR, suggests we will eventually blur the line between biology and technology. The potential to create a new species of 'sapien wizards'—humans augmented with chips and genetic modifications—raises serious questions about our future.
While these advancements could vastly increase our physical and mental capabilities, they also carry the risk of unintended consequences. For example, integrating AGI into this equation could amplify our progress or render humanity obsolete.
Our current relationship with technology is symbiotic, but the leap to a direct human-machine merger could fundamentally change what it means to be human.
Ultimately, our desire to play the role of creator could lead us to a utopian future where human potential is boundless—or to extinction, should we lose control of the tools we create. While the outcome is uncertain, we may be approaching a critical juncture where the choices we make will shape the destiny of our species.
That's what Elon wants to connect directly to our brain cells next, right?!