The artificial intelligence (AI) debate has been largely framed by the influential philosophy of longtermism, which holds that humanity’s focus should extend beyond the present moment and into the distant future. This philosophy has been a driving force in discussions around AI, with proponents arguing that the potential for human extinction should guide our approach to AI development.
Critics raise alarms
However, a growing chorus of critics is challenging the longtermist perspective, labeling it dangerous and a distraction from more immediate AI-related concerns such as biased algorithms and data theft. Emile Torres, a former advocate of longtermism turned critic, goes so far as to compare the philosophy to past ideologies used to justify atrocities such as mass murder and genocide.
Longtermism’s core principles and critiques
Longtermists maintain that our ethical responsibility extends to the far future, envisioning a scenario where trillions of humans inhabit space and new worlds. This perspective argues that each of these future individuals deserves the same consideration as present-day humans, given their sheer numbers.
Critics highlight “dangerous” thinking
Critics like Torres warn that such thinking becomes perilous when combined with a utopian vision of the future and utilitarian moral frameworks that might justify extreme actions. Torres points out that idealizing a future of boundless value, paired with the notion that the ends justify the means, can lead to dangerous consequences.
The stakes and the influencers
The stakes in this debate are high, with figures like Elon Musk and Sam Altman of OpenAI signing open letters asserting that AI could potentially lead to human extinction. However, skeptics note that these individuals stand to gain from the promotion of their AI products as potential saviors.
Influential figures and ventures
The longtermism movement, along with related ideologies like transhumanism and effective altruism, wields substantial influence in academia, particularly in institutions like Oxford and Stanford, as well as within the tech sector. Venture capitalists, including Peter Thiel and Marc Andreessen, have invested in endeavors linked to these movements, such as life-extension companies.
Longtermism, transhumanism, and historical connections
Longtermism’s foundation traces back to the work of philosopher Nick Bostrom, who explored existential risks and transhumanism in the 1990s and 2000s. Transhumanism, the idea that technology can enhance human beings, has historical ties to eugenics, a discredited movement that sought to improve the human population through selective breeding.
Critics, including the academic Timnit Gebru, have pointed out these historical connections, alleging that transhumanism and, by extension, longtermism share parallels with eugenics. They argue that these parallels raise serious ethical concerns given the dark history associated with eugenics.
The complex persona of Nick Bostrom
Accusations and controversies
Nick Bostrom, a central figure in the longtermist movement, has faced accusations of endorsing eugenics because of his writings on “dysgenic pressures.” Bostrom has distanced himself from the term, apologizing for racist posts he made in the 1990s and clarifying that he opposes eugenics as it is commonly understood.
Critics call for a shift in focus
In the face of the dominance of longtermism and related ideologies, critics like Emile Torres and Timnit Gebru are advocating for a shift in focus. They argue that pressing issues such as the exploitation of workers, biased algorithms, and wealth concentration warrant greater attention than the apocalyptic narrative of human extinction.
The profit motive
Torres highlights a profit-driven motive behind the emphasis on human extinction. They argue that discussions of catastrophic scenarios capture attention more effectively than issues like labor exploitation or corporate dominance, even though the latter have more immediate real-world consequences.
A battle of philosophies in a complex AI debate
The clash between longtermism proponents and their critics underscores the complexity of the AI debate. While longtermism’s focus on the far future has shaped discussions around AI development, critics argue that this focus might overshadow immediate challenges that demand attention. As the AI landscape continues to evolve, striking a balance between long-term ethical considerations and addressing present-day issues remains a pivotal challenge.