Two years after the emergence of ChatGPT, Trinity College Dublin’s (TCD) policies on the use of generative Artificial Intelligence (Gen AI) in teaching, learning, and assessment are still under development, with an official statement expected to be issued in the coming weeks.
The Centre for Academic Practice at Trinity Teaching & Learning published a resource guide on Generative Artificial Intelligence and Academic Integrity. In an effort to avoid divergent policies across the college, College has asked schools not to create their own policies on the matter. Instead, they are to use the college's statement as a foundation for local practice and adapt their handbooks to reflect specific approaches.
In the School of Medicine, students’ Yearly Study Guides include “Academic Integrity and Referencing Guidance,” with an excerpt from the Calendar Statement of Academic Integrity. AI use is addressed under “Academic Misconduct,” which states that plagiarism includes “submitting work which has been created using artificial intelligence tools, where this has not been expressly permitted.”
Similarly, the School of Histories and Humanities has stated that it will treat the presence of AI-generated work as a breach of TCD’s guidelines on academic integrity. “Any use of AI-generated material in an essay or exam answer will render that work inadmissible for assessment and will be subject to the sanctions outlined in the College Calendar.”
Eoghan Gilroy, the Education Officer of Trinity College Dublin Student Union (TCDSU), finds the lack of guidance from the college on Generative AI policies “abysmal”. “This statement provides little to no guidance for students who are very often put in a difficult position when deciding whether or not they can or should use Gen AI,” he said.
With the pros and cons varying by academic subject, and opinions on generative AI software divided across the board, many faculties and professors have taken measures to mitigate and control its use, such as changing how exams are delivered and asking students to be transparent about usage.
For example, the School of Psychology and the School of Business have created systems where students self-report AI use in an academic integrity declaration. According to Clare Kelly, an associate professor in the School of Psychology and Department of Psychiatry at the School of Medicine and a principal investigator at Trinity College Institute of Neuroscience, few students have declared use. However, she noted that surveys suggest usage is substantial.
“It’s not clear to us whether our students are an exception or whether the use has not been declared, which may reflect uncertainty on behalf of students about the impact of that declaration,” Clare said.
Gilroy explained that the lack of school-level policies has generated fear among students about using AI: “There is a welcomed push for greater harmonisation within Faculties, as this previous laissez faire attitude has led to significant inconsistencies in how Schools have handled AI, causing confusion and unfair punishments, as outlined.”
A Study.com student survey found that 89% of respondents used ChatGPT to help with a homework assignment, 48% admitted to using it for an at-home test or quiz, 53% had it write an essay and 22% had it write an outline for a paper.
Kevin Kelly, an associate professor of mechanical, manufacturing and biomedical engineering, said the use of Gen AI is widespread in his modules. “If I’m sitting with a student and they have a laptop open, or walking around a lab and people are working on computers, it’s pretty rare there isn’t a tab open somewhere on the computer that has Gen AI in some shape or form.”
However, Khurshid Ahmad, a professor of computer science, said he trusts his students without reservation. “In 54 years of teaching, learning and research at university level my experience of students is that they are honest, keen to learn, and have helped me to learn a great deal. Like in any walk of life, a few might take short cuts, even cheat sometimes, that does not mean the whole lot is to be blamed and monitored.”
Kevin considers Gen AI to be a “disruptive technology” in both positive and negative ways. On one hand, he sees it as an impressive productivity tool for tasks like answering questions that may require shuffling through several PDFs and writing code. On the other hand, he sees this technology putting the validation of capability at risk.
“I am training somebody to perform a particular role…Would you be comfortable flying in a plane that had been designed by an engineer who had used Gen AI to pass their way through university? These are the challenges we have in terms of the consequences of certified people with a skill that they may not actually have,” he explained.
Clare has “enormous ethical concerns” about Gen AI that span beyond education and lead her to never use the software, like how tools such as ChatGPT are “trained on data scraped from the internet, proprietary data, copyrighted data, data that was not given with permission”. Gen AI also presents environmental concerns, as data centres in Ireland now consume over a fifth of the European Union’s electricity, according to Barron’s. This demand is increasing as Gen AI proliferates.
“I feel that what is required is a much more critical conversation about these tools — about their ethical implications, about their climate and environmental implications, about their justice implications, than we have had to date,” she explained.
Raidió Teilifís Éireann surveyed 450 Gen Z students and found that “tech-savvy Irish students who recognise the power of generative AI may know these tools are unethical but use them anyway.”
Gilroy believes there are several ways students can use Gen AI creatively, while still being within the guidelines of academic integrity. Examples include helping develop ideas for assignments, providing an outline to a topic, summarising articles to help students better understand what is being discussed and helping students with syntax and grammar.
“Use of such tools, with due care and attention, can only improve the student experience in a classroom. Students engage with programs differently – showing scepticism as they should, and compare and contrast with materials like lecture notes and textbooks,” Ahmad said.
While it can be difficult at times to detect AI usage in assignments, Kevin believes educators have a big responsibility to ensure students are not using AI in ways that taint academic integrity.
Though engineering isn’t a discipline generally assessed by essay-type questions, some of his exams are delivered online and monitored in campus computer labs, where the computers are locked down to prevent students from accessing AI tools. He has also noticed a shift back to traditional handwritten exams and oral examinations. Clare’s department states that oral exams can be added to any module if it is felt necessary.
Gilroy cited University College Dublin’s (UCD) College of Arts and Humanities traffic light system, which marks individual assignments as red, amber or green to “provide clear guidelines to students about whether or not they are permitted to use generative AI in their work.”
“This is so important for students to know, and I would love to see this approach brought in to some schools in Trinity to help students become more aware of the benefits of Gen AI, but also how damaging and exploitative of a technology it is, particularly in terms of the environment and on countries in the Global South respectively,” Gilroy said.