My Case Against Using Generative AI

by Siri Manneri


In a world more politically polarized than ever, one of my most controversial beliefs is my position on generative AI. For the past two or three years, I’ve watched more and more students around me use generative AI in academic spaces as well as in daily life. I’m drawing a distinction here between generative AI and other forms of AI, since I’ll only be critiquing the former. Generative AI, in a general sense, refers to large language models such as ChatGPT or Claude. Today, I’m arguing my case against using generative AI, mainly due to sustainability concerns, as well as more general ones about how these types of AI actually work.

To start, generative AI is notoriously hard on the environment: the huge data centers required to power these chatbots consume enormous amounts of electricity and water. According to Adam Zewe, a writer for MIT News, data centers carry those water and energy costs while also burdening the communities that surround them, which are often communities of color. And with the Global Carbon Project estimating that, at current emission rates, the remaining carbon budget for limiting warming to 1.5°C will be used up in roughly five years, we must do everything we can to be environmentally conscious. That may mean cutting at least the most unnecessary AI usage out of one’s life.

When it comes to the companies behind prominent generative AI tools, there is often a darker corporate side to the chatbots we know and love. OpenAI, and specifically its CEO Sam Altman, have been embroiled in a number of scandals. One involves Suchir Balaji, a former OpenAI researcher turned whistleblower who, according to The Guardian, was found dead in his apartment shortly after being named as someone with relevant information in a copyright case against OpenAI. If you are willing to divest from corporate entities that have committed wrongdoing, I urge you to consider divesting from this one as well.

Generative AI, as a whole, also doesn’t work the way many people think it does. While Gemini may seem knowledgeable, these platforms don’t strictly know anything at all. Large language models, as a rule, are simply regurgitating whatever sounds plausible based on statistical patterns in the text that the company behind each tool previously scraped to train it. Arguably, the most compelling reason to disavow AI is how blatantly incorrect it often is. AI “hallucinations,” as they’re called, are fairly common and often hard for users to spot. A notable example involves the lawyers who used ChatGPT to prepare court filings and were fed precedent and case law that didn’t actually exist.
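To make the “regurgitating patterns” point concrete, here is a toy sketch in Python. This is emphatically not how production models like ChatGPT or Gemini are built (they use neural networks over tokens, not word-pair counts), but the underlying principle is the same: the program emits fluent-looking text purely by sampling a statistically likely next word from its training data, with no knowledge or facts behind it.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow which in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Emit words by repeatedly sampling a plausible next word.
    The 'model' holds no facts, only co-occurrence statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# A tiny hypothetical training corpus for illustration.
corpus = ("the court cited the case and the case cited the ruling "
          "and the ruling cited the court")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word pair the generator produces appeared somewhere in its training text, so the output always sounds grammatical, yet the program has no idea whether any “case” or “ruling” it names exists. Scaled up by many orders of magnitude, that is the gap hallucinations fall into.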

In conclusion, this brief argument sheds some light on why I have personally resolved never to use any form of generative AI in my own life. I hope this piece encourages you to reconsider your own AI usage, if only to limit it wherever possible, in order to help fight climate change, push back against corporate ills, and, ultimately, support your own success.


Author: Le Dragon Déchaîné

Welcome to Le Havre campus's newspaper
