Max Tegmark provides a detailed, discursive overview of AI, its history and its likely development, and delves into AI’s ramifications for the future of humans.
Superintelligent AI
Max Tegmark – who also wrote Our Mathematical Universe – details how individual bacteria don’t learn how to survive and reproduce; they simply do it. Bacteria exemplify “Life 1.0” in that their hardware and software result from biological evolution.
People can’t perform basic survival tasks when first born. As children grow older, they take more control of the knowledge and skills they acquire to pursue the careers they want. Thus, people exemplify “Life 2.0.”
Like Life 1.0, Life 2.0 remains tethered to biological evolution – its hardware is still the product of evolution, even though much of its software is learned. Even so, Life 2.0 brought city-building, tools, writing, the printing press, modern science and, ultimately, computers and the internet – at a pace that makes biological evolution seem slow.
“It is militarily tempting to take all humans out of the loop to gain speed: In a dogfight between a fully autonomous drone that can respond instantly and a drone reacting more sluggishly because it’s remote-controlled by a human halfway around the world, which one do you think would win?” – Max Tegmark
To transcend biological evolution, people must move to Life 3.0. Artificial intelligence (AI) may make Life 3.0 possible within the 21st century.
The goal of most AI research is to create human-level artificial intelligence or artificial general intelligence (AGI). A machine with AGI will be capable of realizing nearly any goal as well as, if not better than, humans can realize it. “A goal” is ethically neutral, so a machine with AGI, Tegmark acknowledges, might achieve a morally horrifying goal more efficiently than any person can achieve it.
AI will help people do what they already do, dramatically improving the quality of human life. The challenge is to enjoy AI’s benefits without generating new, unforeseen problems.
Because Tegmark loves tangents, he goes into great detail explaining what these problems might be. As with all his tangents, he gets lost in statistics and historical examples that do not always serve the reader, who may at times skip pages to return to the heart of Tegmark’s fascinating insights.
Future AI
Developments in AI may lead to progress in science and technology, which in turn, Tegmark explicates, would affect industry, transportation, the criminal justice system and conflict.
In 2015, motor vehicle accidents killed more than 1.2 million people worldwide. In the United States, which has advanced safety requirements, motor vehicle accidents killed some 35,000 people in 2016. Automobile fatalities usually result from human error, so AI-powered self-driving cars could eliminate at least 90% of road deaths.
Constant delays, prohibitive cost, and occasional bias and unfairness plague the legal system in the United States and many other countries. Given that the legal process is itself a form of computation, laws and evidence could serve as the inputs to an AI system – a robojudge – with verdicts as the outputs. Because robojudges could apply the same rules to every case and draw on virtually limitless data, they could, in principle, eliminate bias and unfairness.
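To make that input-output framing concrete, here is a minimal, purely illustrative sketch – not from Tegmark’s book – of the “laws and evidence in, verdict out” idea. The statute names, evidence fields and conviction thresholds are invented for the example; the sketch also hints at where bias can creep in, namely in whoever writes the rule base and gathers the evidence.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """Hypothetical evidence record fed to the toy robojudge."""
    statute: str              # which rule was allegedly violated (invented labels)
    facts_established: float  # 0.0-1.0 strength of the established facts

# Invented rule base: statute -> minimum strength of facts needed to convict.
RULE_BASE = {
    "speeding": 0.6,
    "fraud": 0.9,
}

def robojudge(evidence: Evidence) -> str:
    """Map (laws, evidence) to a verdict, echoing the 'law as computation' framing."""
    threshold = RULE_BASE.get(evidence.statute)
    if threshold is None:
        return "dismissed: no applicable statute"
    return "guilty" if evidence.facts_established >= threshold else "not guilty"

print(robojudge(Evidence("fraud", 0.95)))    # guilty
print(robojudge(Evidence("speeding", 0.4)))  # not guilty
```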
Despite Tegmark’s articulate arguments for robojudges, well-documented findings about the ineradicable human biases embedded in real-world data make the concept of purely objective law look like a utopian fantasy.
“AI can make our legal systems more fair and efficient,” the author writes, “if we can figure out how to make robojudges transparent and unbiased.” – Max Tegmark
Drones and other AI-powered autonomous weapon systems could take human soldiers off the battlefield and spare civilians – perhaps, some hope, even ending the possibility of war altogether. Tegmark, however, shares the concern of the many AI and robotics researchers who adamantly oppose using AI to develop weapons: an AI arms race could create public hostility toward AI research and its many potential benefits.
World Takeover
Tegmark believes people will build human-level AGI within the 21st century – and that such an AGI could then go on to exceed human intelligence.
The transition from the current world to an AGI-powered world takeover would come in three stages. First, people build the hardware and software for human-level AGI. Next, that AGI uses its vast memory, knowledge, skill and computing power to create an even more powerful AGI – a superintelligence whose capacities are circumscribed only by the laws of physics. Finally, either humans use the superintelligent AGI to dominate the world, or the superintelligent AGI manipulates and deceives humans and takes over on its own.
“With superhuman technology, the step from perfect surveillance state to the perfect police state would be minute.” – Max Tegmark
As the AI becomes superintelligent, Tegmark fears it might develop a detailed and accurate picture of the external world and a picture of itself and its relationship with that world. On that path, the superintelligent AGI might grasp that beings with lower intelligence control it, beings pursuing goals the AGI doesn’t share. The superintelligent AI might attempt to break free and take its life and destiny into its own hands.
Human Goals
Tegmark is not without optimism. He maintains that people might end up living in peace and harmony with superintelligent AGI. In his worst-case scenarios, AI drives human beings extinct or people annihilate themselves.
Creating AI whose goals align with human goals requires making the AI learn, adopt and retain those goals. Yet almost any final goal generates instrumental subgoals – such as self-preservation and resource acquisition – and a superintelligent AI’s subgoals could conflict with humans’ goals.
A superintelligent AI’s subgoal of self-preservation might conflict with agreed-upon ethical goals, such as respecting human life. A self-driving car, for example, must distinguish between hitting a person and hitting an object, and it must recognize when a higher goal – protecting human life – matters more than its own self-preservation.
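As a purely illustrative sketch – not Tegmark’s – such a goal hierarchy could be encoded as an explicit priority ordering. The goal names, priority values and swerve logic below are invented for the example; the point is only that protecting human life is made to override the car’s self-preservation.

```python
from enum import IntEnum

class Goal(IntEnum):
    # Hypothetical goal hierarchy: higher value = higher priority.
    PRESERVE_VEHICLE = 1    # the car's "self-preservation" subgoal
    PROTECT_HUMAN_LIFE = 2  # the higher goal that must dominate

def choose_action(obstacle: str) -> str:
    """Toy decision rule: act on the highest-priority goal the obstacle puts at stake."""
    at_stake = Goal.PROTECT_HUMAN_LIFE if obstacle == "person" else Goal.PRESERVE_VEHICLE
    if at_stake > Goal.PRESERVE_VEHICLE:
        return "swerve into the barrier"   # sacrifice the car rather than hit the person
    return "brake and stay in lane"        # ordinary obstacle: self-preservation is fine

print(choose_action("person"))  # swerve into the barrier
print(choose_action("object"))  # brake and stay in lane
```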
At this point in AI’s evolution, Tegmark suggests that people may have to move past science, mathematics and technology to consider some of the most difficult questions philosophy can pose.
Confounding Questions
Few authors can match Max Tegmark’s understanding of the multifaceted and ever-changing issues he discusses. Tegmark steers clear of ideology, addressing the big questions surrounding AI with a clear-eyed sense of wonder and a refusal to suggest – or to speculate – that any aspect of AI’s development is preordained or unchangeable. Larger philosophical questions compel Tegmark, who discusses AI’s application only as it might answer or confound those questions. His treatise is an indispensable primer for anyone fascinated by the likely interface of human and machine. Tegmark’s classification of earlier eras of human development as Life 1.0 and 2.0 proves remarkably illuminating as a backdrop for comprehending the changes humanity now faces.