Generative artificial intelligence, such as ChatGPT, has become a thorn in the side of educational institutions recently. The rise of large language models and the ease of access to them have led to a massive uptick in students using them, primarily ChatGPT, to do their assignments for them. A Pew Research Center study from January 2025 found that over a quarter of teens (ages 13-17) use AI for their schoolwork, twice the rate of students who admitted the same in 2023. Even that figure likely undercounts the problem, since it excludes respondents who chose not to report their AI usage and students who used AI but never took the survey. This has forced educational institutions to grapple with where adopting a new technology ends and cheating begins. I think the answer is pretty clear: by using an LLM to complete a school assignment, the student skips all the critical thinking and problem-solving that the assignment aims to build. For that reason, I believe that using AI to do your assignment for you should qualify as cheating, because it is equivalent to plagiarizing someone else's work.
To combat unwanted AI usage, institutions have begun implementing policies to govern student AI use, including what constitutes academic dishonesty. While AI policies vary by institution, they commonly focus on ethical usage, plagiarism and disclosure. These policies are constantly evolving because AI itself keeps changing and becoming harder to detect. Even just last year, it was easy to tell when a piece of writing was made with AI like ChatGPT because of how soulless the prose and diction were. Now, however, professors and peer reviewers are struggling to identify when a piece was written using an LLM, leaving educators at many institutions stuck reworking assignments and moving to physical copies to prevent students from using ChatGPT. Alternatively, at many institutions, including Allegheny, some educators have decided simply to require sourcing when AI is used. At Allegheny, the method for citing generative AI is to disclose which LLM you used, the prompt you gave it, and the date you used it. While this method of sourcing at least has people disclose that AI was used, it provides no information about what the response was or how it was integrated into the writing.
This semester, the Biology department has implemented a new policy on generative AI in an attempt to address the issue. Under this new policy, the instructor labels every assignment on a scale from one to five, with each increasing level allowing more use of AI. Level one states, “The assignment is to be completed entirely without AI assistance,” while level five states, “AI should be used to creatively solve the task given by your instructor, potentially co-designing new approaches with your instructor.” Rather than addressing any of the problems AI creates, this new policy attempts to teach students to use AI correctly.

But there is no correct way to use AI, because all usage is inherently unethical. One major issue is the environmental cost of AI usage. AI models run in massive data centers: large, windowless buildings that do nothing but house thousands of servers for major websites and companies, and, increasingly, for AI. These servers, like all computers, must be cooled to prevent overheating, so data centers pump in tens of thousands to millions of gallons of freshwater every day to cool them. Data centers are typically built near small rural towns in dry or temperate regions, where a center’s water demand often exceeds what the town itself uses. This consumption harms historically water-stressed regions across the world and creates countless problems for people in adjacent communities by siphoning off the drinking water these already vulnerable towns depend on. Some residents can no longer drink the water in their houses, or must buy water to cook with or drink.
Another ethical consideration is the lack of data safety now that LLMs are becoming more popular. Fun fact: AI is not even artificial intelligence. LLMs are just chatbots that make educated guesses based on statistical trends in their training data. To collect that data, companies such as OpenAI, the creator of ChatGPT, scrape the internet, pulling in everything from scientific articles to social media to websites such as LinkedIn. One major cause for concern is the lack of consent for this collection. Many websites have opted their users into data collection by default, raising serious privacy and security issues and drawing considerable backlash over internet privacy. Artists are affected the most, as their works are being stolen and used to train the same AIs that are taking their business away.
With professors now responsible for deciding how much AI, if any, is allowed on each assignment, there is a huge discrepancy in AI usage between courses. Some professors do not allow AI on most assignments, while others encourage it for most assignments. Using AI to do your schoolwork for you is a waste of your time and money, especially in the biology and chemistry disciplines. When asked to explain a topic and include sources, an LLM tends to hallucinate, creating fake sources and publishers that use real scientists’ names. These hallucinations trip people up because they sound like real papers and the authors’ names are real, but when you try to track the sources down, they turn out not to exist. To get an LLM like ChatGPT to give you real sources, you need to jump through endless hoops, and given the time sink, you might as well have just done the research yourself.
Additionally, while there are some merits to AI checking work, mainly in the computer science field, AI is wretched when it comes to biological and chemical processes. One hallmark of chemistry is that every rule has a million and one exceptions, and AI just doesn’t understand that. Because of the nature of biology research, I have never seen a completely correct response from an LLM when asked about a specific biological process. A major reason is that our knowledge of many biological and chemical processes is still incomplete, and when asked about a process with a substantial knowledge gap, an LLM will often invent mechanisms and sources in an attempt to bridge it. When an LLM scrapes billions of statements from the internet without context, its output becomes even more wrong and meaningless. While I can try to understand the thinking behind encouraging students to use AI to create outlines or examine drafts, it is still nowhere near useful enough a tool to excuse its effects on the environment.
While I understand the argument that using AI, even when it produces wrong information, can build students’ critical thinking skills, I think that argument is wrong. Stealing gallons of freshwater from vulnerable populations so you can correct wrong information you could have written yourself is not a compelling case for AI in schools. While developing critical problem-solving skills is important, there are far better ways to build them that don’t rely on AI. Building these skills through the assignments themselves, rather than taking an AI shortcut, is much more effective and does not contribute to mass environmental devastation. In biology, that might mean critiquing and analyzing a research paper, which develops those skills far better than correcting an AI’s output ever could. Overall, I find the new policy the Biology department has introduced very disappointing, and I want to see future policy adaptations that steer away from AI rather than try to embrace it. If everyone simply deals with AI’s presence rather than resisting it, it will continue to grow and destroy our planet. It is the job of educators and the younger generations to prevent AI from taking over our lives and our planet.