Max Tegmark
One of the major goals of Tegmark’s book is to explore the possible futures that humanity can share with AI, cyborgs, and digital lifeforms. Rather than limit his investigations to a specific theory, Tegmark uses Life 3.0 to explore a number of possibilities in areas including technological advancements, space exploration, and political systems. Some of the possibilities are utopian, others dystopian, but none is presented as the inevitable outcome of our current trajectory. Tegmark also aims to allay humanity’s fears about the future of AI, since so many cultural messages treat the uncertainty of the future as an opportunity for things to go wrong. He uses imagination and scientific inquiry to try to steer both our beliefs about the future and the future itself in the right direction.
In Chapter 5, “Aftermath: The Next 10,000 Years,” Tegmark includes (in Table 5.1) a number of long-term possible futures, which he calls “AI Aftermath Scenarios” (162). For many of these he provides detailed speculative fictions that bring to life the advantages and disadvantages of the respective situations. Never does Tegmark explicitly state which scenario he prefers or why, but rather he provides a sense of what the possibilities are and how they may be developed or avoided. He states why some of these are more or less likely and discusses who would be pleased and displeased by the respective outcomes.
Not everyone, according to Tegmark, shares his openness regarding possible futures. There are many in the AI community, he claims, that have rigid views about what the future of human and AI life will look like. He writes:
In terms of what will ultimately happen, you’ll currently find serious thinkers all over the map: some contend that the default outcome is doom, while others insist that an awesome outcome is virtually guaranteed. To me, however, this query is a trick question: it’s a mistake to passively ask ‘what will happen,’ as if it were somehow predestined! (159).
Even though Tegmark seems sympathetic to the view that the universe becomes more goal-oriented as time goes on, he is not dogmatic on this point and holds that the future is essentially open. Philosophically, he rejects fate, and epistemologically, he rejects the possibility of knowing what that fate would be. Instead, he believes that whatever possibility becomes reality will be the result of human decisions in the near-term future. It is crucial for him that we not only know what the possibilities are but also actively determine which possibilities we prefer and why. Speculation, then, is not idle but useful: “If we don’t know what we want, we’re less likely to get it.”
All of the scenarios assume that AI will continue to develop, potentially at a breakneck pace. Tegmark writes, “we can’t dismiss the possibility that AGI will eventually reach human levels and beyond” (132). While he is not a Digital Utopian, Tegmark is also definitely not a Techno-Skeptic. He does not believe that superintelligence is a necessary development nor does he think that AGI will never surpass the human level. Tegmark’s views on the future are connected with his desire to promote the beneficial AI movement, specifically in the areas of safety and ethical responsibility. He does not believe that the future of advanced AI is inherently good and inevitable, nor does he believe that pursuing AI advancements beyond what we can currently conceptualize is pointless. As an effective altruist, he believes that it is only through openness to possible futures that we can use AI’s capabilities for the greater good.
At the 2017 conference on AI safety that Tegmark describes in the epilogue, hundreds of AI researchers, business professionals, and others assembled to discuss the future of beneficial AI. At the end of their conference, they collaborated on a number of ethical principles, and their main determination was that “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence” (329). AIs should be created that have particular, human-centric moral inclinations and not be developed toward amoral ends. It is not enough that AI not be aligned toward malfeasance; it must be actively engineered for human betterment. The main reason for this is that undirected AI may develop goals that are misaligned with our own. While it would be false to call such AIs evil, they could clearly be at odds with what humanity wants for itself or needs for its future. Such AIs could do terrible things, as has so often been depicted in dystopian sci-fi. It is crucial that AIs be very carefully forged and that this be done immediately because the further that AIs progress, the more autonomous they are liable to become.
Although there is broad agreement within the beneficial AI movement that the move toward beneficence is crucial, there is substantial disagreement, according to Tegmark, on nearly everything else. “The one thing everybody agrees on,” he writes, “is that the choices are more subtle than they initially seem. People who like any one scenario tend to simultaneously find some aspect(s) of it bothersome. To me, this means that we humans need to continue and deepen this conversation about our future goals, so that we know in which direction to steer” (201). The various utopian scenarios that Tegmark proposes each offer something exciting but also carry significant drawbacks. The desirability of each scenario depends on individual preferences, including fundamental moral views and political ideologies. Though Tegmark does not defend any particular scenario against the others, he is also not content to let our individual differences rest in absolute relativism. The goal is a conversation that directly addresses potential controversies so that we are better prepared for all possible results. The point of this conversation is not simply that we will all understand one another better but that we can mediate among these various goals, establish broad consensus regarding shared principles, and then work to integrate these into the development of AI programs.
In Chapter 7, Tegmark discusses four ethical principles on which there is potential for consensus: utilitarianism, diversity, autonomy, and legacy (271). Each of these has a role in ethical decision-making, not just for the future of benevolent AI but for all areas of society it affects. These are not principles that he prescribes but rather ones he believes we already broadly agree upon (271). By clarifying what the broad consensus is, and that the general aim is the “survival and flourishing” of human societies, Tegmark hopes to establish clear and explicit bases for a productive, broadly agreeable debate on proper AI safety and development.
These principles directly relate to basic philosophical questions: “To wisely decide what to do about AI development, we humans need to confront not only traditional computational challenges, but also some of the most obdurate questions in philosophy. To program a self-driving car, we need to solve the trolley problem of whom to hit during an accident. To program a friendly AI, we need to capture the meaning of life” (279). These philosophical and moral challenges can lead to disagreement because the answers do not come from technical study or advancement. They are related to subjective values, social norms, and diverse cultural attitudes toward technology and religion. As in all areas of the topic of AI, Tegmark does not answer these questions but provides thought-provoking stories and facts that can help both scientists and average people clarify their ideas with the goal of creating points of consensus.
As discussed at length in Chapter 8, Tegmark’s view of consciousness is subjective experience, what it is like to be or do a thing (283). Qualia are the perceptual components of experiences of which one is conscious, rather than a mind-independent reality. When it comes to AI, the question of consciousness is whether it can sense the experience of being or doing a thing, rather than just being or doing it. The development of AI consciousness calls into question the physical requirements for consciousness, as well as its very nature: Does something require a body to be conscious? Can the same consciousness move through various physical forms? How might AI process qualitative versus quantitative experiences?
Tegmark is deeply concerned with the proliferation of conscious life. He thinks this is of the most fundamental ethical and spiritual significance:
Of all traits that our human form of intelligence has, I feel that consciousness is by far the most remarkable, and as far as I’m concerned, it’s how our Universe gets meaning. Galaxies are beautiful only because we see and subjectively experience them. If in the distant future our cosmos has been settled by high-tech zombie AIs, then it doesn’t matter how fancy their intergalactic architecture is: it won’t be beautiful or meaningful, because there’s nobody and nothing to experience it (184).
In short, the universe is meaningful because we have experiences of it. Tegmark does not require a deity to endow beings with consciousness, nor is there any indication that consciousness (or some other nonphysical reality) is a fundamental aspect of the universe. It is, rather, a particular achievement of particular arrangements of information emergent from various physical substrates. Therefore, for Tegmark, without something like human consciousness to provide experience, the universe is nothing more than an enormous, meaningless shell. We, then, are uniquely situated to bestow the universe with meaning. This, in Tegmark’s view, gives humanity cosmic significance of the highest form. It is up to humanity to populate the universe with intelligent life and give it meaning.
Dedication to the advancement of conscious life is one of humanity’s main tasks: “Since there can be no meaning without consciousness, it’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe” (314). Anthropocentrism, the view that human beings are of central importance, is a fundamental aspect of this view, even as it could lead beyond humanity to another kind of lifeform. The development of superintelligent conscious life is, from one perspective on this view, a vocational calling for all of humanity.
Tegmark is unabashed about the grand future possibilities of humanity (and human descendants, like cyborgs and AIs). He explores the idea of the proliferation of conscious life throughout the cosmos, the complexity that such life could reach (both in technological capacity and intelligent, conscious experience), and truths of the universe that could be achieved by such life which are, at the moment, unfathomable.
A potential problem with Tegmark’s outlook is its missionary-like urgency. He believes in the colonization of the cosmos, which could lead, at its radical end, to a nearly religious pressure to expand and enrich consciousness at all costs. This could clash with his views on AI ethics and safety, assuming that conscious AIs will be part of humanity’s future ability to spread consciousness throughout the universe.