Max Tegmark provides a detailed, discursive overview of AI – its history and its likely development – and delves into AI’s ramifications for humanity’s future.

Superintelligent AI
Max Tegmark – who also wrote Our Mathematical Universe – details how individual bacteria don’t learn how to survive and reproduce; they simply do it. Bacteria exemplify “Life 1.0” in that their hardware and software result from biological evolution.
People can’t perform basic survival tasks when they’re born. As children grow, they take increasing control of the knowledge and skills they acquire, eventually enabling them to pursue the careers they want. Thus, people exemplify “Life 2.0.”
Like Life 1.0, Life 2.0’s hardware remains tethered to biological evolution, but its learned software developed tools, settlements, writing, the printing press, modern science and, ultimately, computers and the internet – at a pace that makes biological evolution seem slow.
“It is militarily tempting to take all humans out of the loop to gain speed: In a dogfight between a fully autonomous drone that can respond instantly and a drone reacting more sluggishly because it’s remote-controlled by a human halfway around the world, which one do you think would win?” – Max Tegmark
To transcend biological evolution, people must move to “Life 3.0.” Artificial intelligence (AI) may make that possible within the 21st century.
The goal of most AI research is to create human-level artificial intelligence or artificial general intelligence (AGI). A machine with AGI will be capable of realizing nearly any goal as well as, if not better than, humans can realize it. “A goal” is ethically neutral, so, as Tegmark acknowledges, a machine with AGI might achieve a morally horrifying goal more efficiently than any person could.
Ideally, people will enjoy AI’s benefits without generating unforeseen new problems. AI will help people do what they already do, dramatically improving the quality of human life.
Because Tegmark loves tangents, he goes into great detail explaining what these problems might be. As with all his tangents, he gets lost in statistics and historical examples that do not always serve the reader, who may, at times, skip pages to return to the heart of his fascinating insights.
Future AI
Developments in AI may lead to progress in science and technology, which in turn, as Tegmark explicates, would affect industry, transportation, the criminal justice system, conflict, and much more.
In 2015, motor vehicle accidents killed more than 1.2 million people worldwide. In the United States, which has advanced safety requirements, motor vehicle accidents killed some 35,000 people in 2016. Automobile fatalities usually spring from human mistakes. AI-powered self-driving cars could eliminate at least 90% of road deaths.
Constant delays, prohibitive costs, and occasional bias and unfairness plague the legal system in the United States and many other countries. Given that the legal system itself is a form of computation, laws and evidence could be input into an AI system – a sort of robojudge – with verdicts as an output. In principle, robojudges would be objective and base their decisions on limitless data, and so could eliminate bias and unfairness.
However, despite Tegmark’s articulate arguments for robojudges, recent discoveries regarding the ineradicable human biases present in all data make the concept of purely objective law a utopian fantasy.
“AI can make our legal systems more fair and efficient…if we can figure out how to make robojudges transparent and unbiased.” – Max Tegmark
Perhaps, Tegmark hopes, even more deadly AI-based weapons will end the possibility of war altogether. Drones and other AI-powered autonomous weapon systems could eliminate the need for soldiers and save civilian lives. Tegmark expresses concern over AI and robotics researchers who adamantly oppose using AI to develop weapons and who create public hostility toward AI research and its many potential benefits.
World Takeover
Tegmark believes people will build human-level AGI within the 21st century – an AGI that, he projects, could then surpass human intelligence.
The transition from the current world to an AGI-powered world takeover would come in three stages, according to Tegmark’s reckoning. First, people build the hardware and software for human-level AGI. Next, human-level AGI uses its vast memory, knowledge, skill, and computing power to create an even more powerful AGI, a superintelligence with capacities circumscribed only by the laws of physics. And finally, either humans use superintelligent AGI to dominate the world or the superintelligent AGI manipulates and deceives humans and takes over.
“With superhuman technology, the step from perfect surveillance state to the perfect police state would be minute.” – Max Tegmark
As the AI becomes superintelligent, Tegmark fears it might develop a detailed and accurate picture of the external world and a picture of itself and its relationship with that world. On that path, superintelligent AGI might grasp that it is being controlled by beings with lower intelligence, beings pursuing goals the AGI doesn’t share. The superintelligent AI might attempt to break free and take its life and destiny into its own hands.
Human Goals
In Tegmark’s worst-case scenario, AI drives human beings extinct or people annihilate themselves. However, he is not without optimism. He maintains that people might end up living in peace and harmony with superintelligent AGI.
Creating AI whose goals align with humanity’s requires fulfilling the subgoal of making AI learn, adopt, and retain human goals.
One concern is that a superintelligent AI’s subgoals could conflict with humans’ goals. For example, superintelligent AI’s subgoal of self-preservation might conflict with agreed-upon ethical goals, such as respecting human life. The program that runs a self-driving car, for instance, must enable it to distinguish between hitting a person and hitting an object, and must know when a higher goal – such as not injuring a person – matters more than self-preservation.
At this point in AI’s evolution, Tegmark suggests that people may have to move past science, mathematics, and technology to consider some of the most difficult questions philosophy can pose.
Confounding Questions
Few authors can match Max Tegmark’s understanding of the multifaceted and ever-changing issues he discusses. Tegmark steers clear of ideology, addressing the big questions surrounding AI with a clear-eyed sense of wonder and a refusal to suggest – or to speculate – that any aspect of AI’s development is preordained or unchangeable. Larger philosophical questions compel Tegmark, who discusses AI’s applications only as they might answer or confound those questions. His treatise is an indispensable primer for anyone fascinated by the likely interface of humans and machines. Tegmark’s classification of earlier eras of human development as Life 1.0 and 2.0 proves remarkably illuminating as a backdrop for comprehending the changes humanity now faces.