Boomers, Doomers, and Artificial General Intelligence
More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity. By Adam Becker. Basic Books, New York, NY, April 2025. 384 pages, $32.00.
Artificial intelligence (AI) has taken the world by storm, especially since the arrival of ChatGPT in November 2022. The resulting public discourse is split between two ideological groups that science journalist Adam Becker calls boomers and doomers in his new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity. Becker defines a “boomer” as someone who believes that AI will transform society into a world of great prosperity and human flourishing. In contrast, a “doomer” believes that AI signals the end of humanity; doomers even quantify the probability of that outcome as “p(doom).” According to Becker, boomers and doomers are often two sides of the same coin, in that members of both camps populate OpenAI, Google, and the other Silicon Valley companies that serve as the powerhouses behind AI technologies like ChatGPT.
Becker’s perspective is somewhat similar to investigative reporter Karen Hao’s account of OpenAI’s troubling culture and subsequent consequences in her recent text, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI [2]. However, his narrative focuses specifically on the intellectual and philosophical currents of boomers versus doomers. In particular, Becker identifies two distinct but related viewpoints—effective altruism (EA) and rationalism—that constitute the core of his book.
The central figure of the rationalist movement in More Everything Forever is Eliezer Yudkowsky: a self-educated maverick who is based in Silicon Valley. Per Becker’s account, Yudkowsky was an unusual child who found his calling in artificial general intelligence (AGI), or superintelligence. In 2000, he founded the Singularity Institute for Artificial Intelligence (SIAI) to hasten the arrival of AGI and the singularity: the point at which AI surpasses humanity in its power and ushers in a flourishing society. But several years later, Yudkowsky’s outlook changed; he came to feel that AGI would pose an “existential threat” to humanity unless it was properly “aligned” with human values. He subsequently renamed SIAI the Machine Intelligence Research Institute (MIRI) and created a blog called LessWrong that focuses on AI’s perceived existential threat. Contributors to the blog started their own blogs, some of which became very influential, including Scott Alexander Siskind’s Slate Star Codex (now Astral Codex Ten).
The leading argument of this group was captured by philosopher Nick Bostrom in his famed paper clip thought experiment. Consider a hypothetical AI system that is trained to maximize the production of paper clips, a seemingly harmless goal. But as it becomes “superintelligent,” it creates instrumental subgoals to better fulfill the original goal. One such subgoal might be to eliminate all of humanity, since humans consume resources that could otherwise go towards the production of paper clips. Rationalist enthusiasts have since envisioned many scenarios wherein misaligned AI systems could destroy humanity; none is more voluminous or bizarre than the recently released AI 2027 report, which predicts that “the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution” [3]. In September 2025, Yudkowsky and coauthor Nate Soares released a book on this theme that appeared on The New York Times Best Sellers list the following month [5].
Meanwhile, EA is the brainchild of philosopher William MacAskill and his colleagues. EA advocates for charitable donations in a manner that most effectively helps humanity. At first glance, this idea seems undoubtedly admirable; who would object to optimized altruism and good deeds? Perhaps the most high-profile adherent of this ideal was cryptocurrency entrepreneur Sam Bankman-Fried. Once hailed as a great entrepreneur and philanthropist, Bankman-Fried was found guilty of massive fraud in November 2023, a year after his FTX cryptocurrency exchange spectacularly collapsed into bankruptcy; in March 2024, he was sentenced to 25 years in federal prison.
The collapse of Bankman-Fried’s company highlights some of the basic problems with the EA movement. For example, EA supporters use a strange, deranged version of utilitarianism to put their values into practice. They believe that the most effective way to do good is to secure a lucrative job, earn as much as possible, and give away as much as possible to worthy causes. MacAskill offered Bankman-Fried this very advice when the latter was a student at the Massachusetts Institute of Technology; it ultimately led him down the path to FTX and eventual fraud.
Throughout More Everything Forever, Becker thoroughly demonstrates the absurdity of the EA enterprise, with its cavalier exercises in crude utilitarian calculations and the ridiculous hubris of imagining that one can determine, here and now, the best long-term course of action for all of humanity. To elaborate: EA is closely associated with a movement called longtermism. The concept may initially sound banal: the lives of future humans matter as much as ours, so we have a responsibility to bequeath a better world to the next generation. This is hardly a new insight, as many environmental activists regularly invoke this moral doctrine. But longtermism, as espoused in MacAskill’s book What We Owe the Future [4], is concerned not with the next 100 or even 1,000 years, but with millions of years in the future. A consequence of this utilitarian calculus over the welfare of trillions of future humans is a singular focus on existential risk. In a 2013 paper, Bostrom writes that given even a one percent chance of astronomically many people existing in the far future, “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth one hundred billion times as much as a billion human lives” [1]. The single-minded focus of longtermism is thus to increase the odds, however slightly, of humanity’s long-term survival. And according to EA philosophers, the single biggest threat to this goal—far greater than nuclear Armageddon or climate change—is AGI. This focus on AGI as the most serious threat to humanity’s survival merges the EA and rationalist movements into one.
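To make Bostrom-style claims concrete, here is a rough sketch of the expected-value arithmetic that longtermists invoke, with purely illustrative numbers rather than Bostrom’s own: if there is a probability p that N people will eventually exist, then reducing the probability of extinction by δ saves p · N · δ lives in expectation. For instance,

\[
p \times N \times \delta \;=\; 10^{-2} \times 10^{40} \times 10^{-20} \;=\; 10^{18} \ \text{expected lives saved},
\]

which exceeds a billion lives by a factor of a billion. Once N is assumed to be astronomically large, the product swamps any quantity at human scale; this is precisely the lever that longtermist arguments pull.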
The penultimate chapter of More Everything Forever turns to the opposite viewpoint: the boomers, or accelerationists, who are represented by billionaires like Marc Andreessen and Jeff Bezos. Andreessen—a businessman, venture capitalist, and former software engineer—seeks to throw caution to the wind and use as much energy as possible to reach AGI, while Bezos—founder and former chief executive officer (CEO) of Amazon—wants to colonize space so that a vastly larger human population can produce thousands of modern-day prodigies.
Several common threads appear throughout Becker’s book. First, the EA and rationalist movements primarily attract philosophers and computer software gurus. Additionally, many of their arguments are based on philosophical puzzles that more closely resemble science fiction than hard science and substantive evidence. In fact, Becker humorously notes that several of the key players in his narrative are longtime fans of science fiction. Because these individuals are often close to very important centers of power and policy, their somewhat distorted ideologies can quickly gain traction. For example, Dominic Cummings—who served as chief advisor to former U.K. Prime Minister Boris Johnson—is a champion of EA.
But how do these fringe groups come to wield such outsized influence? Becker addresses this question in his final chapter:
Without billionaires, fringe philosophies like rationalism and effective accelerationism would stay on the fringe, rather than being pulled into the mainstream through the reality-warping power of concentrated wealth.
The EA movement and its institutes at the University of Oxford are funded by billionaire money (e.g., Open Philanthropy), while Yudkowsky’s institute was backed by both Open Philanthropy and venture capitalists such as Peter Thiel. Similarly, OpenAI was initially backed by co-founders Elon Musk and Sam Altman; Altman, now the company’s CEO, has close connections to a vast network of Silicon Valley billionaires [2]. Given what Becker sees as billionaires’ problematic impact on contemporary society, he outlines several measures that might ultimately lead to a world free of their influence. This type of important discussion deserves broader attention and a wider audience, and More Everything Forever is a valuable contribution to the ongoing conversation.
References
[1] Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15-31.
[2] Davis, E. (2025, October 1). OpenAI: Extraordinary accomplishments, but at what cost? SIAM News, 58(8), p. 11.
[3] Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025, April 3). AI 2027. Retrieved from https://ai-2027.com.
[4] MacAskill, W. (2022). What we owe the future. New York, NY: Basic Books.
[5] Yudkowsky, E., & Soares, N. (2025). If anyone builds it, everyone dies: Why superhuman AI would kill us all. New York, NY: Little, Brown and Company.
About the Author
Devdatt Dubhashi
Professor, Chalmers University of Technology
Devdatt Dubhashi is a professor in the Data Science and AI Division at Chalmers University of Technology in Sweden. He earned his Ph.D. in computer science from Cornell University and has held positions at the Max Planck Institute for Computer Science in Germany and the Indian Institute of Technology Delhi.
