Artificial intelligence has been the theme of the past few weeks at Trinity, albeit in different ways. From high-level policy discussions to hands-on artistic experiments, the potential and pitfalls of AI have been explored through both a scientific and cultural lens.
On Friday, 26th September, researchers from the ADAPT Research Ireland Centre for AI-Driven Digital Content Technology led the #ExploreAI expo, which looked at artificial intelligence through an artistic and human-centred perspective. The event, part of Trinity’s START (Start Talking About Research Today) Festival for European Researchers’ Night, invited visitors to think critically about AI’s role in our lives—its creativity, its blind spots, and its growing influence over how we experience the world.
The Douglas Hyde Gallery was transformed into an interactive AI playground, where art met algorithms and visitors were encouraged to question what technology really “sees” when it looks at us. One exhibit let people track their digital day, from scrolling through social media and checking sleep data to using Google Maps, and revealed just how much of that activity is quietly monitored. Data brokers, the shadowy middlemen of the internet economy, collect astonishingly granular information: how long you hover over a Netflix title before giving up, the comments you start typing but never post, or your patterns of rage-quitting a game before inevitably returning to it. This behavioural data, when pieced together, paints a disturbingly intimate portrait of who you are—what you like, what frustrates you, and what might persuade you. It’s then sold to advertisers, political campaigners, and other interested parties. One particularly striking statistic shared at the event estimated that by the time a child in the United States turns 13, over 72 million pieces of personal data will have been collected about them.
The showcase also tested participants’ confidence in their ability to recognise AI-generated content. Visitors were asked to decide whether short stories, poems, and translated texts were written by humans or by machines. It turned out that instincts weren’t always reliable: many confidently wrong guesses revealed just how blurred the line between human and machine creativity has become. The activity echoed a wider cultural unease: as AI grows more capable of mimicking our styles and emotions, can we still tell the difference? And does it even matter if we can’t?
Still, #ExploreAI was far from alarmist. Many exhibits highlighted AI’s promise as much as its pitfalls. Visitors could explore a high-resolution digital twin of Dublin’s Docklands, showing how AI and data are shaping urban planning and sustainable development. A talk by Tang Ngo of the ADAPT Centre demonstrated how AI-assisted technologies could help medical interpreter robots learn from human interactions. The overarching message was clear: AI is not going away; the question is no longer whether we should use it, but how best to harness it for good.
That question was particularly apt in light of the National AI Leadership Forum, held the previous day at William Fry’s Dublin office. This Forum, also hosted by ADAPT, in partnership with the Insight Centre for Data Analytics and William Fry, brought together more than 100 senior figures from government, academia, industry, regulators, and civil society. Their aim was to build consensus on what Ireland’s AI future should look like and how to ensure that the country’s approach remains both innovative and ethical.
This gathering followed two earlier high-level roundtables in March and May, where participants agreed that Ireland needs a clear, coordinated space to discuss AI’s opportunities and risks. The appetite for leadership is palpable. Ireland’s Government has already announced the creation of a new National AI Office to oversee how the country implements the EU AI Act, a sweeping new law designed to regulate the technology across Europe. The Forum’s discussions will inform an AI Leadership Charter and Action Plan, due later this month, which will guide Ireland’s contribution to the refreshed National Digital Strategy.
The conversations at the Forum revolved around five national priorities that participants identified as critical to Ireland’s competitiveness and credibility in the AI era. The first is global leadership. Ireland will assume the Presidency of the Council of the EU in 2026, a symbolic and practical opportunity to showcase the country as a trusted, forward-thinking hub for responsible innovation. To do this, Ireland must show that it can combine world-class research and ethical oversight with agile, transparent regulation.
The second priority is building Ireland’s AI workforce. This doesn’t just mean training data scientists but rather developing a national framework for AI literacy that gives citizens and workers alike the knowledge and confidence to use AI safely and effectively. From engineers to teachers, policymakers to healthcare staff, AI literacy will need to become as fundamental as digital literacy is today.
The third is public trust in government services. The Forum discussed the potential of flagship AI pilot projects within the public sector, initiatives that could use AI to improve transparency, efficiency, and participation, while keeping citizens directly involved in the process. Public trust is a fragile commodity, and demonstrating that AI can deliver tangible benefits, rather than merely efficiencies, will be vital if Ireland is to maintain social confidence in its rollout.
Closely tied to this is the fourth priority: managing AI risks and social impact. Participants called for independent and transparent regulatory testbeds, anchored in Ireland’s research ecosystem, to assess new technologies before they reach the public. These testbeds would feed into a proposed National AI Observatory, an entity designed to monitor the effects of AI across sectors and track public attitudes.
Finally, the Forum addressed the need to balance governance with innovation. Overly rigid regulation risks stifling creativity, while too much flexibility invites abuse. The goal is to create regulatory “learning spaces” or sandboxes, where researchers, small businesses, and public bodies can test AI systems in controlled environments, with shared oversight. This model would ensure accountability without slowing innovation, an approach well-suited to Ireland’s reputation as a regulated tech hub.
Beyond the Forum, momentum around Ireland’s AI strategy has only grown. In mid-September, the Government designated fifteen national competent authorities responsible for enforcing the EU AI Act, including the Data Protection Commission, the Central Bank, and Coimisiún na Meán. These regulators will work under the coordination of the forthcoming National AI Office, expected to be operational by August 2026. Rather than centralising power in one agency, Ireland will adopt a distributed model in which sector-specific regulators retain their expertise while the new Office acts as a unifying node.
This proactivity positions Ireland among the first countries in Europe to implement the AI Act. It reflects an ambition not just to comply, but to lead—and to show that regulation and innovation can coexist. The combination of academic leadership, legal insight, and civic dialogue that characterised both the Forum and the #ExploreAI expo may prove essential to making that balance work.
Across both events, what stood out most was not the technology itself, but rather the tone of cautious optimism grounded in human values. Whether through art, policy, or debate, the underlying question remained the same—how do we build a future where AI serves people, not the other way around? If the conversations of late September are any indication, Trinity intends to keep asking that question, and Ireland is determined to help answer it.