Boston Academic Contends Catastrophic AI Anxieties Are Overblown and Misdirected


  • Existing AI applications lack the capacity to cause large-scale catastrophes; concerns about AI enslaving or destroying humanity are overblown.
  • The true danger of AI lies in its potential to erode essential human qualities, such as judgment-making, serendipity, and critical thinking skills.
  • Thoughtful integration of AI is crucial to prevent the gradual impoverishment of human existence and ensure that AI enhances rather than diminishes our way of being.

Academics at UMass Boston’s Applied Ethics Center have been studying how engagement with AI shapes people’s understanding of themselves, and Nir Eisikovits in particular believes these catastrophic anxieties are overblown and misdirected. AI does raise serious problems, but they are problems caused by humans, not by the technology acting on its own, and they require the attention of policymakers. Such problems have been around for a while, the reasoning goes, and they are hardly cataclysmic.

In recent months, the rise of AI, exemplified by systems like ChatGPT, has sparked widespread anxiety about its potential dangers. Some have even voiced existential fears and catastrophic scenarios, likening AI to pandemics and nuclear war. Eisikovits argues instead that the true danger lies in the subtle erosion of essential human qualities, prompting a reevaluation of what it means to be human in the age of AI.

Challenging catastrophic scenarios

While thought experiments like the “paper clip maximizer” have been used to illustrate potential risks, the reality is that existing AI applications are far from possessing the capability to cause large-scale catastrophes. These scenarios, while intriguing, belong more to the realm of science fiction than to imminent threats. AI systems today are built for specific tasks and lack both the sophisticated judgment and the access to critical infrastructure that the extreme examples would require.

Rather than cataclysmic events, the true danger lies in the gradual transformation of human existence through the increasing integration of AI. Existing AI technologies have already demonstrated their potential for harm, such as the creation of convincing deepfake media and the perpetuation of algorithmic bias in decision-making systems. These issues demand attention and regulation from policymakers, but they are not existential threats to humanity.

The diminishing of human qualities

Professor Nir Eisikovits argues that the existential danger posed by AI is of a different nature – it is a philosophical risk. AI has the potential to alter how individuals perceive themselves and can gradually erode fundamental human abilities and experiences. One such ability is judgment-making, a trait deeply ingrained in human nature. As more judgments are automated and delegated to algorithms, people may lose the capacity to make these judgments themselves, leading to a decline in their ability to reason and make informed decisions.

Another crucial aspect of human existence that AI affects is the role of chance and serendipity. Humans value unexpected encounters and the element of surprise in their lives, yet algorithmic recommendation systems, built on prediction and planning, steadily squeeze out such serendipitous experiences. The gradual replacement of chance with algorithms could rob individuals of meaningful and unexpected discoveries.

Furthermore, the advancement of AI’s writing capabilities raises concerns about the decline of critical thinking skills. If AI technology replaces writing assignments in higher education, educators may lose a valuable tool for teaching students how to think critically and express themselves effectively.

The importance of considered integration

While AI does not pose an imminent catastrophic threat, the uncritical adoption and integration of AI in various domains do carry consequences. Prof. Eisikovits warns that if these developments continue unchecked, human skills, such as judgment-making, the appreciation of chance encounters, and critical thinking, will be diminished over time. Although the human species will survive these losses, the quality of human existence will be impoverished as a result.

In the midst of rising anxieties surrounding AI, it is crucial to differentiate between exaggerated catastrophic scenarios and the philosophical risks associated with its integration into society. While AI is unlikely to bring about an apocalyptic end, the erosion of essential human qualities is a genuine concern. By recognizing and addressing the subtle costs of AI, policymakers, researchers, and society at large can ensure that AI technologies are thoughtfully integrated to enhance human existence rather than diminish it.

Prof. Nir Eisikovits, Founding director of the Applied Ethics Center

Nir Eisikovits is a professor of philosophy and founding director of the Applied Ethics Center. Before coming to UMass Boston he was associate professor of legal and political philosophy at Suffolk University, where he co-founded and directed the Graduate Program in Ethics and Public Policy. Professor Eisikovits’s research focuses on the moral and political dilemmas arising after war. He is the author of “A Theory of Truces” (Palgrave Macmillan) and “Sympathizing with the Enemy: Reconciliation, Transitional Justice, Negotiation” (Brill) and co-editor of “Theorizing Transitional Justice” (Routledge). He is also the guest editor of a recent issue of Theoria on “The Idea of Peace in the Age of Asymmetrical Warfare.”



Glory Kaburu

Glory is a journalist with extensive knowledge of AI tools and research. She is passionate about AI, has authored several articles on the subject, and keeps abreast of the latest developments in artificial intelligence, machine learning, and deep learning, writing about them regularly.
