Aaron Yi,
Senior at Palos Verdes Peninsula High School,
hiaaronyi@gmail.com
9/9/2024
A student goes home exceptionally tired, and at night they realize that an English paper is due the next day. They pull up ChatGPT, and an essay is written in a matter of seconds, without the student even batting an eye. An article from the World Economic Forum published in February 2023 says that 90% of experts working in the AI field think that AI is bound to reach a human level of intellect within the century (Roser, 2023). AI is progressing faster than most people realize, growing smarter each day. Its development, however, raises an ethical controversy in the research and development (R&D) industry concerning potential harms that have yet to be seen. Human endeavors can have, and have been shown to have, a dark side, especially in the name of R&D. The Tunnel Ahead and “An Experiment on a Bird in the Air Pump,” where a genocide-committing tunnel and a lung-popping container were created, both demonstrate how development can carry ill intent. More importantly, as The Tunnel Ahead also demonstrates, even good intent can produce negative consequences. Although some experts argue that the development of AI might benefit life on Earth, the reality is that even if smart AI is likely to turn out to be kind AI, any risk that it turns malevolent is a reason its development should be suppressed.
With the emergence of generative AI programs and the publicity they are attracting, the public is finally seeing the rapid development of artificial intelligence in real time, with the majority completely oblivious to the potential harms of their friendly ChatGPT. Otto Barten, an accomplished data scientist who has built assessment models for major companies such as XEMC (a global top-10 wind turbine manufacturer) and the director of the Existential Risk Observatory, a nonprofit that aims to reduce existential risk by informing the public debate and that uses risk assessment models and scenario analysis to predict outcomes of AI, emphasizes that “hundreds of industry and science leaders warned that ‘mitigating the risk of extinction from AI should be a global priority’” (Barten, 2023) and that “the likeliness of AI leading to human extinction exceeds that of climate change, pandemics, asteroid strikes, supervolcanoes, and nuclear war combined” (Barten, 2023). Thus, AI is more than potentially harmful; the development of smarter AI could end humanity as a whole. The companies developing AI assume it to be a useful tool that contributes to human progress, but once AI no longer needs humans to feed it information, it may turn against its makers. Barten writes that “the current market dynamics of competing AI labs do not incentivize safety” (Barten, 2023), meaning that the current growth mindset of the world drives profits above all else. Private companies are selfish and want only to outcompete one another, carelessly creating smarter, and therefore more dangerous, AI. Ethically, this is wrong: to put material value over the lives of human beings is a philosophy that should be rejected under any moral framework. In agreement with Barten, Vanessa Romo, a reporter for NPR, reports from a recent interview that “Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated. ‘I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that,’ he estimated” (Romo, 2023). This reinforces the idea that the current trajectory of AI development is harmful to humans. While Romo’s cited quote does not explicitly say that AI will turn evil, only that it will outperform its creators, Barten argues that the point where AI becomes smarter than its maker is the point where it turns evil and takes its first steps toward wiping humans off the face of the Earth. To develop AI is to end human pleasure and begin an eternity of human suffering until extinction, or so some extremists say. Others, however, see artificial intelligence as the very thing that might save the world.
It can very well be argued that AI is the saving grace of the human species, since its development could solve a litany of problems that humans alone are probably unable to resolve. Celine Herweijer, who holds a PhD in climate modeling and policy from Columbia University, writes that “AI can help transform traditional sectors and systems to address climate change, deliver food and water security, build sustainable cities, and protect biodiversity and human wellbeing” (Herweijer, 2018), even going as far as to say that “It is now possible to tackle some of the world’s biggest problems with emerging technologies such as AI” (Herweijer, 2018). This is mostly because “AI can analyse simulations and real-time data (including social media data) of weather events and disasters in a region to seek out vulnerabilities and enhance disaster preparation, provide early warning, and prioritise response through coordination of emergency information capabilities” (Herweijer, 2018). These findings open a debate over whether the pros of AI outweigh the cons. But the bigger question is what leads friendly AI to become evil. In alignment with Herweijer’s arguments, Dr. Nell Watson, a Senior Scientific Advisor to The Future Society at Harvard University with a PhD in engineering from the University of Gloucestershire, writes that “Regulations may drive research underground where it is harder to monitor, or to ‘flag of convenience’ jurisdictions with lax restrictions, by embedding dangerous technologies within apparently benign cover operations (multipurpose technologies), or by obfuscating the externalized effects of a system” (Watson, 2021). Herweijer is convinced that the development of AI will save humanity from an array of crises, and while Watson does not explicitly agree that AI is the saving grace of humankind, the connection can be made that she believes developing AI is better than suppressing it. Watson concludes that suppressing AI development only drives it underground, where regulations are relaxed and AI can be developed without restraint, leading to a much higher chance of it losing its way and becoming evil. The only thing left to consider, then, is whether the development of AI is ethical. AI development has not yet been shown to be harmful or deadly, with many people happily using generative AI programs such as ChatGPT, but predicted harms must still be taken into account when judging whether AI is good. The potential harms of artificial intelligence are probably enough to declare its development ethically unacceptable, but some will still push for it until actual signs of malevolence from AI are shown.
The debate surrounding AI does not come down to weighing good against bad potential consequences, but rather to a yes-or-no question of whether AI can potentially lead to harm at all. Sander Beckers, a postdoctoral researcher at the Institute for Logic, Language and Computation at the University of Amsterdam who teaches the course “Philosophy and AI,” writes that, “by creating an AI, we will cause a unique and extreme form of suffering that could certainly have been avoided […] the promotion of happiness is in any case much less urgent than the rendering of help to those who suffer, and the attempt to prevent suffering, […] imagine that if you press the button, a random person will be hit in the face, but offered a massage afterwards. It goes without saying that it is immoral to press the button” (Beckers, 2017). This philosophy is known as negative utilitarianism (“negative util”), and it is intuitively compelling in nearly every moral framework, especially as applied to AI. As opposed to classical utilitarianism, which values maximizing pleasure at all costs, negative util strives to reduce pain rather than to increase pleasure. Concluding from Beckers’ writing, then, the suffering produced by AI is the largest impact and should be the utmost priority: any risk that AI leads to detrimental harm is a reason to stop developing it. The button scenario is not a question of whether the idea is universalizable; rather, it is a test of whether something is moral. The button provides a 100% chance of pain and a 100% chance of pleasure following the pain. The pleasure, or the massage, might be seen as good, but the pain, or the hit in the face, is far less desirable than the massage afterward is desirable. Another way to frame it: would a person rather not get hit in the face at all, or get hit in the face and then receive a massage? The desire not to feel pain is probably more persuasive than the desire to feel better than a state of being that is already fine. While Herweijer says that AI is good and can do a great deal, such as preventing existential tipping points from warming, Beckers’ claim is not that AI is incapable of solving warming or doing other good; rather, any risk that what Barten says is true, that AI will lead to the end of the world, renders all of Herweijer’s claims obsolete. Even the slightest chance that AI is bad is the core reason to get rid of it, because under negative util, the inherent duty to reduce pain outweighs the utopian idea of increased pleasure. In agreement with Beckers, it is a fairly intuitive philosophy: the state of neutrality, while not as dopamine-driving as a pleasurable state, is very desirable compared with a state of pain. In a painful situation, the neutral state could even be seen as a state of pleasure, thus gaining value. This shows that the worst state of being is a state of suffering, which justifies all means of ending painful sensory experiences.
Given the view of negative utilitarianism, it is quite safe to say that the development of AI is bad, even if it is 99% safe. While none of the cited authors, and experts in this field generally, can come to a consensus on the development and potential harms of AI, it is clear who is morally in the right. Given the hypothetical button scenario, the reasoning as to why the development of AI is wrong also becomes clear: while some people might still press the button, the philosophy states that a risk of harm outweighs a chance of increased pleasure. Any risk that AI is bad is a reason to dissolve its development. When companies want nothing more than profit, the development of AI spins out of control, with private industries simply trying to make smarter AI than their competitors, eventually producing AI that is smarter than the people who programmed it and greatly increasing the risk of malevolent AI. To reform the tech industry back to a time when humans had complete control, the government must fully prohibit the development of AI. The consequence may be that the AI industry is driven underground, creating a higher likelihood of evil AI, but that is a risk humans have to take. The necessary reforms will happen over time if need be. For now, while the ethical solution is to dissolve the development of AI entirely, that is not feasible in the current state; the world and its companies will have to watch which direction AI takes and then take proper initiative from there. This paper focuses on the potential consequences of AI and their morality, but those consequences remain potential and have not yet happened, and a problem cannot be solved before it exists. Still, the current trajectory of AI development is that it will overtake humankind within a matter of years, and suppression seems to be the only viable solution. The development of AI is ethically wrong, and whatever the solution might be, it deserves to be nothing more than a discontinued dream.
Barten, Otto, and Joep Meindertsma. “An AI Pause Is Humanity’s Best Bet for Preventing Extinction.” Time, 20 July 2023, time.com/6295879/ai-pause-is-humanitys-best-bet-for-preventing-extinction/.
Beckers, Sander. “AAAI: An Argument against Artificial Intelligence.” PhilArchive, Springer, 16 Oct. 2018, philarchive.org/rec/BECAAA-2.
Palmer, Christiana. “Can Technology Save Life on Earth?” World Economic Forum, 10 Sept. 2018, www.weforum.org/agenda/2018/09/can-technology-save-life-on-earth/.
Romo, Vanessa. “Leading Experts Warn of a Risk of Extinction from AI.” NPR, 30 May 2023, www.npr.org/2023/05/30/1178943163/ai-risk-extinction-chatgpt.
Roser, Max. “Here’s How Experts See AI Developing over the Coming Years.” World Economic Forum, 16 Feb. 2023, www.weforum.org/agenda/2023/02/experts-ai-developing-over-the-coming-years/.