Max Tegmark
Review the story of the Omega Team and Prometheus. What are the potential unspoken ethical issues in this story? Does the Omega Team handle its power responsibly? Should Prometheus be restrained? Explain why or why not.
As the title of Chapter 1 indicates, Tegmark believes that the advancement of AI technology and questions concerning AI safety are part of “the most important conversation of our time” (22). Why does he think this? Is he right? Why or why not?
In his discussion of AGI, machine learning, and the advancing capacities of AIs to perform tasks previously only achievable by humans, Tegmark shows that there are some tasks AIs can already do better than humans, but that there are others, like art and scientific research, that AIs still find difficult. What has changed since the publication of this book in 2017? Do you think there are some things AI will never be able to do? Why or why not? Is there anything that makes humans special or exceptional? How so?
In Chapter 3, Tegmark discusses many of the ways that AIs could transform the economy, manufacturing, warfare, finance, and the legal system in the near future. How do you think AI might directly affect your life or work? Does it excite or worry you? Why?
At the beginning of Chapter 5, Tegmark asks his readers to review a series of seven questions, briefly write down their answers, and return to them at the end of the chapter. Having read Tegmark’s book, do you think differently about any of these issues? What specifically? Why or why not?
Think about the possibility of an “intelligence explosion.” What do you think this would mean for the future of humanity? Do you welcome or fear a future of superintelligent, high-powered artificial creatures? Do you think humans should try to control AIs? Do you think this will become impossible? Why?
Tegmark provides a substantive list of “AI Aftermath Scenarios” at the outset of Chapter 5, many of which he subsequently describes in hypotheticals. Which of these do you think is the most likely? Why? Which scenario do you think is the most preferable? Why? Are there any scenarios that you think Tegmark neglected? What are they? Check out the poll results at Tegmark’s AgeOfAi.org. What do you make of others’ responses?
In Chapter 7, Tegmark questions the nature of goals (biological, physical, conscious, etc.). What is the tension between goal retention and the pursuit of an ultimate goal? What should the ultimate goal be? Should there be any ultimate goal? Why or why not?
In the final chapter, Tegmark discusses various levels of questions on consciousness, from the “easy” questions about brain processes to the “really hard” question of why anything is conscious at all. Note Figure 8.1, which Tegmark uses to diagram the relationship between various questions about the nature of consciousness. What philosophical presumptions undergird Tegmark’s study of consciousness? Are these justifiable? Why or why not? Independently speculate on the “really hard” problem of consciousness. How would you answer it? Why?
In the Epilogue, Tegmark includes “The Asilomar AI Principles,” a list of research problems, long-term problems, and values democratically agreed upon by at least 90% of participants in a 2017 conference on the future of AI safety. Evaluate this list. On its basis, would you deem the conference a success? Do you think it’s a worthy basis for future research and ethics? Are there any major oversights? If so, what are they?