
Is AI a Wonder Drug or Just a Placebo?

  • Writer: Narusorn (Noah) Lindsay
  • Oct 20
  • 5 min read

Opinion by Zaine Ahmed, Grade 10


ChatGPT is many things to many people: a boyfriend, a savior, a tool, a burden, and an interest. But no matter what people may think, it is not intelligent. In fact, ChatGPT can’t even think for itself at all. So if it can’t think for itself, what is it?


ChatGPT and other so-called “AI” websites and tools are really just LLMs, or Large Language Models. The term “AI” itself dates back to the 1950s, but in recent years it has been pushed hard to increase shareholder interest, in the hope that more money would be invested in the companies building these models (The History, Timeline, and Future of LLMs). To simplify, an LLM is essentially an advanced autocomplete. You know when you’re typing on a phone and, just above the letters, there are three suggestions for words the phone assumes you’re going to type next? That is essentially what GPT and other LLMs are. To reach the quality it has today, GPT takes information from all corners of the internet and studies it through two forms of “learning.” The first kind is known as “unsupervised learning,” and it is essentially where data from every corner of the internet is shoved into GPT so it can get a rough idea of what goes where.


Think of it like giving a toddler a piece of candy: sure, they might eat it, but they also might get it all over their hands, face, and everything in between. It’s just the stage where GPT gets a general overview of what anything and everything is (The History, Timeline, and Future of LLMs). The next stage is supervised learning, where GPT gets streamlined into something that generally knows what goes where and how. Think of it as teaching a toddler: “Hey, that candy goes in your mouth, not on your chin.” In other words, showing the LLM “correct” answers (The History, Timeline, and Future of LLMs). With the sentence “The sky is _____,” unsupervised learning would teach the AI that the sky could be any colour, while supervised learning would “teach” the AI to use context clues, such as the line appearing in a fairy tale, to finish the sentence as “The sky is blue.” Now that we have an overview of what GPT really is, it’s time to talk about its dangers.
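The “advanced autocomplete” idea above can be sketched in a few lines of code. This is a toy bigram model that just counts which word tends to follow which; real LLMs use neural networks with billions of parameters rather than raw word counts, and the tiny corpus and function names here are purely illustrative:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus: the model only "knows" what it has seen here.
corpus = (
    "the sky is blue . the sky is dark . the grass is green . "
    "the sky is blue ."
).split()

# For each word, count which words were seen immediately after it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word, k=3):
    """Return up to k words most often seen after `word`, most common first."""
    return [w for w, _ in following[word].most_common(k)]

# "blue" follows "is" most often in this corpus, so it ranks first --
# just like a phone keyboard ranking its three word suggestions.
print(predict_next("is"))
```

The same principle scales up: GPT’s “training” is, at its core, adjusting a far more sophisticated version of these statistics so that its next-word guesses match the patterns of the text it was fed.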


Critical thinking


LLMs are widely used in schools, colleges, and workplaces: from outlining lessons, writing emails, and doing homework, to completing exams, doing complex mathematics (which they often get wrong), and writing reports. LLMs like GPT hit schools like a truck, forcing them to rapidly grow their technological capabilities to keep up with students’ increased ability to cheat on exams, homework, and essays. Schools all over the globe have had to change drastically because of the increased capabilities of LLMs (Sullivan). This has led many across the globe to wonder, especially those in administrative and teaching roles who work closely with young people: is it ruining our critical thinking? The resounding answer to this question is, in fact, yes.


A recent study in the United Kingdom looked at about 650 people of all ages and educational backgrounds and found that increased use of AI, from “saving time” on tasks such as writing emails to relying on it completely, led to a decreased ability to carry out critical thinking tasks, due to the brain’s reliance on AI to do the “thinking” for it (Jackson). To back this up, although anecdotally, many teachers across the globe report a visible decline in their students’ ability to carry out basic tasks such as writing emails, completing homework, and even using vocabulary appropriate for their age group (Ramirez, 2025). This all builds toward the holistic idea that LLMs aren’t really as good for you as you might think, especially in the case of children.



Ethical issues


There are many ethical issues with LLMs, from being trained on vast amounts of copyrighted material, as acknowledged in the terms of service of many LLMs, including but not limited to Flint AI, a so-called “controlled AI tool” (“FLINT AI”), to other major issues like human isolation. There are unfortunately many cases of lonely people, both men and women, getting addicted to AI use and growing personally attached to LLMs such as GPT, Grok, and Character AI. By some counts, as many as 19% of Americans have talked to an AI for the express purpose of romantic conversation. You may argue that these are one-off cases, and that only outliers with an expressed interest in a romantic relationship with an AI would ever form one, but that isn’t the case. Based on data from the subreddit where much of this activity is documented, the majority of these romantic relationships come not from websites expressly created for the purpose, such as Character AI, but from GPT.



Furthermore, members of that community generally agree that their AI use began purely for convenience, and that they never thought they would end up in the situation they are in (MyBoyfriendIsAI). A third study discussed the idea that the convenience of AI could further increase device time, for example by communicating with an AI instead of a significant other or a friend, as well as the notion that “continuous engagement with AI technologies may, in turn, affect students’ ability to form social connections, relax without digital stimuli, or maintain a healthy balance between academic and personal life” (Klimova and Pikhart). Overall, AI raises many ethical issues connected to screen time, personal connection among young people, and loneliness.


Conclusion


LLMs have been marketed as a wonder drug: able to save you time, be your companion, answer your questions, and write your essays. But like any other wonder drug, this comes at a cost, simply the low, low cost of your critical thinking and human connection. This glorified text-prediction machine, which dredges the internet for data with no regard for fair use and devours energy like a sun eater, is all smoke and mirrors, all the while pushing electricity costs onto ordinary people, generating AI images of real people for perverted and criminal purposes, and spreading like a pandemic that seemingly everybody in the corporate world has caught.


Generative LLMs are a bubble, one that pops for anybody who does more than twenty seconds of research, and the infection has spread from writing to “art” to music and everything in between. It seems like every other website you visit now needs an AI chatbot, while nobody cares about what’s behind the curtain: the real world. At a critical time for the future of humanity, LLMs have set us back drastically, and that is why the generative AI bubble needs to pop, as soon as possible.


