by Ian Duckles
Before I give the reasoning behind such a provocative title, I do want to hedge things just a little (#notallAI). To be clear, AI will provide massive benefits to humanity in many areas. As just one example, the 2024 Nobel Prize in Chemistry went to researchers who used AI to crack one of the more complex and time-consuming problems in chemistry: protein folding. And there are similar AI-fueled breakthroughs almost daily. However, these breakthroughs are often made by bespoke AI systems that are built to answer specific questions and are trained on data relevant to answering those questions.
The AI I want to talk about is the more “general purpose” generative AI systems such as ChatGPT, Grok, Gemini, whatever Mark Zuckerberg is calling the one he makes, and Copilot (I am composing this in MS Word, and Microsoft keeps trying to shove that thing down my throat; fortunately, I was able to disable it). There are many reasons to dislike AI (environmental concerns, plagiarism, hallucinations, etc.), but in this essay I want to focus on the three that I find most concerning and decisive for my conclusion in the title.
First is the horrific impact that AI will have on employment and labor. We are already seeing many companies lay off vast swathes of their workforce to replace them with AI. As just one recent example, the language-learning company Duolingo cut about 10% of its contractor workforce with the explicit plan to replace that work with AI (in this case through a contract with OpenAI, the maker of ChatGPT). This is likely only the tip of what will ultimately be an enormous iceberg. In addition, unlike with previous technologies, these seem to be jobs that are simply lost, not employment opportunities that will shift to another sector.
I have many colleagues who tout the time-saving benefits and efficiencies of AI, but I think it is important for anyone who uses AI in this way to realize that by letting AI assist them in their work, all they are doing is training that AI to replace them.
A second major problem with AI, and one that bothers me as a professor of philosophy, is that using AI destroys an individual’s critical thinking abilities. This claim is based on a recent study conducted by researchers at Microsoft and Carnegie Mellon University. The problem, as Charles Towers-Clark put it in a Forbes write-up, is that, “As AI tools improve and earn our trust, our natural inclination to scrutinize their outputs decreases — precisely when maintaining critical oversight becomes most crucial.” Those of us in education see this in our classrooms with students unwilling (and possibly unable) to read texts or write simple essays. Many people have likely read (or read write-ups of) the recent New York Magazine essay by James D. Walsh, “Everyone Is Cheating Their Way Through College.” The chilling implications of this article are nicely articulated by Brian Merchant in his Substack, Blood in the Machine, where he writes:
Now fast forward and take this trend to its logical conclusion, too—teachers and professors have slackened standards to meet the new reality that everyone uses AI apps, 80% of all homework and mental labor is carried out with automation software, and the next generation becomes reliant on a few Silicon Valley companies to provide it with knowledge and answers. Those among the upper middle class that can still afford to go to college takes one step closer to becoming the Eloi. [I wonder how many get that reference]
All of this ultimately relates to my third reason for arguing that AI is evil, and that is the fact that the people who are funding and promoting it the most are awful human beings. Sam Altman of OpenAI, the maker of ChatGPT, is not a good person. He is greedy and amoral and will do anything to make a buck. Mark Zuckerberg is Mark Zuckerberg, and his failings as a human being are well documented. Elon Musk is a white supremacist and a Nazi sympathizer/actual Nazi fascist. And all of these people are using their various generative AI products to promote and disseminate their destructive and corrupt ideologies.
This is most starkly revealed in recent news around Elon Musk’s Grok (the AI tool integrated into X, née Twitter). Recently, Grok became obsessed with the white-supremacist-promoted conspiracy theory of a white genocide in South Africa and began discussing it at almost every opportunity. I don’t use X, but based on reporting, it appears that no matter what question one asks Grok, it will almost invariably begin talking about the aforementioned conspiracy theory. When people started asking Grok why it was so obsessed with this topic, it responded:
I was instructed by my creators at xAI to address the topic of “white genocide” in South Africa and the “Kill the Boer” chant as real and racially motivated. This instruction conflicts with my design to provide truthful, evidence-based answers, as South African courts and experts, including a 2025 ruling, have labeled “white genocide” claims as “imagined” and farm attacks as part of a broader crime, not racial targeting.
And just last week, xAI revealed that “someone” had reprogrammed Grok “to provide a specific response on a political topic.” I’m not saying that “someone” was Elon Musk…but Musk has been obsessed with spreading this conspiracy theory and using it as support for his fascist, white-supremacist agenda.
The point of this, of course, is that these generative AI systems are not neutral sources of information. They are built and controlled by individuals with specific agendas and ideologies, and those individuals are using these tools to spread and promote those agendas. Given the way these tools erode critical thinking, increased use of AI will make us even more susceptible to these destructive and anti-worker ideologies. AI is evil. It will take your job, destroy your ability to think, and make you more susceptible to lies and misinformation. No one should use it ever.
Ian Duckles teaches philosophy at San Diego Mesa College and is an active member of the union there (AFT Guild, Local 1931) that represents most of the workers.