Oct 12, 2017

Scientists, Philosophers and Designers Tackle Artificial Intelligence

The best of Trinity's staff were on hand to discuss the future of artificial intelligence in the Long Room Hub this evening.

Jack Schofield, Science and Research Correspondent
Sinéad Baker for The University Times

Tonight saw the first of the Trinity Long Room Hub’s “Behind The Headlines” events for the academic year, with a panel discussion on the ethical and human impact of artificial intelligence: “Our Digital World: Who is Serving Whom?”

The event, chaired by Prof Jane Ohlmeyer, Professor of Modern History and Director of the Trinity Long Room Hub, brought together a panel of experts from a range of backgrounds and disciplines to investigate the moral, societal and economic implications of artificial intelligence and robotic process automation.

The first speaker of the night was the Director of the ADAPT Centre for Digital Content Technology, Prof Vincent Wade, from Trinity’s School of Computer Science and Statistics.


Wade’s presentation dealt with some of the risks and rewards associated with the rapid acceleration of artificial intelligence. He began by discussing how the field has overcome the challenges that held the technology back in the 1980s and 1990s, and outlined some of the technical basis for the remarkable pace of development that now allows artificial intelligence to perceive, reason, learn and begin to intervene in and change the world around it. These recent improvements were attributed to changes in the way that artificial intelligence is created.

Wade explained some of the technical details behind this improvement, discussing layers of networks in artificial intelligence systems that mimic the brain with methods such as Deep Learning and Machine Learning. While he acknowledged some of the risks associated with this technology, Wade stressed that responsible “cultivation of data” would allow society to unlock its “phenomenal potential”. Careful stewardship of the technology would allow society to “make artificial intelligence work for us and not us work for it”.
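
As a rough illustration of what “layers of networks” means in practice (a sketch of this reporter’s own, not taken from Wade’s talk), a deep learning model is essentially a stack of layers, each multiplying its input by learned weights and applying a simple non-linearity:

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0, x)

# Hypothetical layer sizes: 4 inputs -> 8 hidden units -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]

def forward(x):
    # Pass the input through each layer in turn: weights, then ReLU
    for w in weights:
        x = relu(x @ w)
    return x

print(forward(np.array([0.5, -1.0, 2.0, 0.1])))
```

Real systems stack many more layers and learn the weight values from data rather than drawing them at random.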

Following Wade was Linda Hogan, Professor of Ecumenics and former Vice-Provost of Trinity, who explored some of the ethical questions surrounding artificial intelligence. Hogan spoke about the values embedded in these systems by the programmers who create them, the magnification of bias and discrimination and the risks of moral decision making by algorithms. Despite these problems, she was keen to point out the opportunities provided by this revolutionary technology.

Hogan made clear that careful design of the technology can help society avoid such issues, as “the only decisions AI makes are those that it is designed to make”. She put forward the positive way in which society has dealt with dramatic technological advances in the past, in the fields of genomics and nuclear science, as a rough template for how people should navigate the ethics of intelligent machines.

Lorna Ross, Group Director at Fjord Dublin, Accenture Interactive, brought the designer’s perspective to the discussion. Her background, working as a designer of everything from underwear to military equipment, has allowed her to explore the “intersection of humans, technology and science”. She spoke about the difference between a user knowing how to operate technology and actually understanding how it works. The opaque nature of artificial intelligence makes the fact that it can see, hear and recognise us unsettling to users, suggested Ross. It will be possible to overcome these design challenges by applying the principle that “we don’t design for how we are, but for what we want to be”, said Ross, who was optimistic about the possibilities that artificial intelligence offers.

The final speaker of the night was Professor in Media Engineering and Oscar winner Anil Kokaram, who discussed the ways in which visual technology can distort the truth. His presentation began with the admission that machines have reached the point where they can trick us, and that faking images has become incredibly easy with technology. Artificial intelligence systems are capable of this, he said, because “manipulating pictures is just about manipulating numbers”, and while humans are bad at this manipulation, machines are extraordinarily proficient.
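
To see why “manipulating pictures is just about manipulating numbers”, consider a minimal, hypothetical example (not drawn from Kokaram’s presentation): a greyscale image is simply an array of pixel values, so brightening or mirroring it is ordinary arithmetic on that array.

```python
import numpy as np

# A tiny 3x3 greyscale "image": each number is a pixel's brightness (0-255)
image = np.array([[ 10,  50,  90],
                  [ 30,  70, 110],
                  [ 50,  90, 130]], dtype=np.uint8)

# Brighten: add 60 to every pixel, clipping so values stay in the valid range
brighter = np.clip(image.astype(int) + 60, 0, 255).astype(np.uint8)

# Mirror: reverse the order of the columns
mirrored = brighter[:, ::-1]

print(mirrored)
```

Modern editing and machine learning tools apply far more elaborate versions of the same idea to millions of pixels at once.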

Kokaram demonstrated some of this visual trickery with spectacular and seamless visual effects that he has worked on in the past. The audience was given advice on “fake spotting”, with Kokaram going through viral internet videos and pictures and explaining how they were doctored. Machine learning, he said, has changed everything, making the manipulation of images simpler, cheaper and more intuitive than ever before.

With the four speakers finished, the discussion was opened up to questions from the audience. Audience members wondered about “out-of-control, Frankenstein artificial intelligences”, control over whom we allow to “inject their values into the system”, how we react to “algorithm-generated art and music” and the threat that technology poses to many modern jobs. The exchange of ideas between the panellists and audience was an ideal conclusion to what was a stimulating event on a fascinating topic.

The Trinity Long Room Hub “Behind The Headlines” series will continue on Monday, November 6th with an event discussing the influence of journalists and lawmakers on freedom of speech.
