In Focus
Oct 14, 2023

Fostering Media Literacy in the Age of Artificial Intelligence

Should College be educating its students on the uses and abuses of AI, rather than banning it completely?

Wynslow Wilmot, Features Editor
Eleanor O'Mahony for The University Times

With the rise of Artificial Intelligence (AI) in recent years, universities have taken a number of different stances on regulating the tool. As a student at Trinity, I have been continuously told that using generative AI on assignments is unacceptable, without any further discussion of the ethics of using or abusing AI tools such as ChatGPT. Professors have essentially swept any conversation about it under the rug, leaving no room for education on its potential uses. This made me wonder whether there were ways to use this tool ethically and effectively, and whether these could be taught to students. If credible sources are beginning to use AI, how will students notice this and distinguish accurate information from inaccurate information? It also led me to wonder whether the AI understood its own flaws and limitations in presenting information. Maybe it even had suggestions on how to bypass those flaws and make use of its strengths. So, I sat down and interviewed ChatGPT. 

The need to educate students in media literacy, particularly as the prevalence of AI rises, is at an all-time high. Media literacy refers to the ability to critically analyse the information and stories presented within mass media in order to determine their accuracy and credibility. This now includes ChatGPT and other generative artificial intelligence platforms. As the popularity of these platforms continues to grow, reputable sources have begun to utilise them. In May of this year, Amnesty International made use of AI-generated images, posting to their social media photos depicting a woman at a protest. Educators need to update their approach to teaching media literacy to include artificial intelligence. 

Even ChatGPT argues the importance of this: “AI is fundamentally transforming how media is created, distributed, and consumed. Students need to grasp these changes to navigate the evolving media environment effectively.” If students are not being taught about AI, they are being left behind. Whether you like it or not, this technology is becoming increasingly ubiquitous across all industries. By taking the stance that usage of AI is off the table, as well as failing to educate on ethical and responsible ways to use it, Trinity is severely handicapping students. 


There are ways to utilise this tool without risking plagiarism, yet these methods have been completely ignored by professors. How are students expected to discern fact from fiction if they are simply not taught the signposts of false information, especially that which comes from artificial intelligence? ChatGPT seemingly agrees: "As AI-generated content becomes more prevalent, students must be equipped to differentiate between content produced by humans and that generated by AI. They should also understand the implications this has on the credibility and reliability of information they encounter." By simply banning the use of artificial intelligence platforms, Trinity has made it significantly harder for students to learn how to use this technology properly and ethically, to spot when otherwise reputable sources have used it, and to determine whether the information being produced is accurate. Without allowing room for education about AI, Trinity hinders students' ability to learn valuable skills and tools. 

While Artificial Intelligence is increasingly used as a tool to produce the information that circulates and shapes our understanding of the world, we must also understand that the way in which it does this could lead to the spread of harmful misinformation. ChatGPT gathers information from all corners of the internet, using anything available to it and prioritising the most popular sites. This produces a mess of information, and the model cannot yet differentiate between fabricated information and reality-based information. While AI is becoming more and more powerful, it still has plenty of flaws. As ChatGPT puts it: "[Given] AI's potential to inadvertently perpetuate bias and be misused to manipulate information, students need the skills to recognize these issues and develop strategies to detect and address them. Ethical considerations, such as the responsible use of AI in journalism and content creation, are also paramount." We must be able to hold two truths at once: that AI can be a brilliant tool to make use of, and that, as its use becomes more common, one must be educated in how to recognise and discern the real from the fabricated. Without this education, misinformation will be spread significantly more quickly, not just by AI but also by students. 

Whilst students using Artificial Intelligence platforms to plagiarise assignments and content is an understandable concern, there are deeper and more important questions to be asked when discussing AI. These questions are less about plagiarism and more about how people are going to discern and identify bias within the media they consume. How will you be able to tell whether the information being shared with you presents an accurate depiction of what is actually occurring? These worries are reminiscent of when the internet first came into common use, and we are still grappling with them today. Yet, had educators chosen not to teach pupils about the internet, we would surely be far worse off.
