By Gionna Bean, Opinion Editor
Student uses ChatGPT – Photo by Gionna Bean
Artificial Intelligence (AI), once seen as a futuristic concept and a source of fear for many, has now become part of the classroom. Opinions on the extent of its role, however, vary.
At Aquinas College, the decision on how AI should be used in the classroom is left to the discretion of individual professors, meaning that students will encounter varying policies on AI usage depending on the course. These policies are clearly outlined in each course’s syllabus, making it important for students to be aware of the specific guidelines for every class they take.
The current General AI Usage Policy at Aquinas College breaks down AI usage into three distinct levels:
- Level one: No AI usage is allowed in any form for the course. This includes tools such as Grammarly, Chegg, ChatGPT, Bard, and others.
- Level two: Some AI usage is allowed, but only in specific ways as outlined by the course instructor. Common examples include using AI for brainstorming, pre-work, editing, or other designated tasks.
- Level three: Students are encouraged to utilize AI throughout the course, provided it is appropriately cited.
Associate Professor of Communications Dr. Kyle Hull has chosen level two for his classes, explaining that students should learn to use AI as a tool to assist their learning without becoming overly reliant on it. “I understand that my students are likely to use AI, and I won’t deny them the benefits it offers,” Hull said. “However, I ask that they use it for pre-work (brainstorming) and post-work (editing) only. This way, the content remains primarily their own creation.” Hull’s stance underscores his belief that AI should be a supplementary tool rather than a replacement for critical thinking and personal effort.
Adjunct Professor of Psychology Carlie Zervan added to this point by stressing that transparency is important in AI use in the classroom. “In my courses, I encourage students to view AI as one tool in their academic toolkit while maintaining awareness of its capabilities and limitations,” Zervan said.
One such limitation is AI hallucination, which occurs when the system generates plausible-sounding but false or fabricated information. Professor of Psychology Dr. Daniel Cruikshanks shared an example of such a hallucination: he asked ChatGPT to write a paper, only to find that most of the references it provided were from fictitious journals.
Despite these challenges, Cruikshanks believes that large language model (LLM) AI has great potential. “I believe LLM Generative AI can be a valuable tool for various tasks, and we’ll eventually reach a point where it becomes reliable,” Cruikshanks said.
Alongside course-specific policies, the AQ Writing Center has developed its own guidelines for using AI. As writing consultants assist students, they too have had to navigate questions about whether AI should play a role in the writing process. The Writing Center was the first group on campus to release a publicly available document addressing AI use.
“We’ve updated it slightly since its initial version, but the core remains unchanged: when consulting with students on writing assignments, we defer to the professor’s specific policies,” said Julie Bevins, Aquinas Writing Center Coordinator.
Student works on laptop – Photo by Gionna Bean
As AI technology continues to evolve, so too will the policies surrounding its use. Aquinas College’s flexible approach aims to encourage students to take advantage of the tool’s capabilities while maintaining academic integrity and originality in their work.
The emphasis on clear communication among professors, students and writing consultants fosters an environment where AI can be used responsibly, enriching the learning experience rather than diminishing it.




