In partnership with Kainos, Tortoise hosted a roundtable event that addressed the complexities of building trust in Artificial Intelligence (AI).
The event centred on a recently published report that Tortoise and Kainos collaborated on with the help of over 20 leading experts in ethical and responsible AI: a piece of work exploring how the misuse of artificial intelligence can be addressed to build a future of trust for everyone.
For this event, we invited some of the report's contributing experts to help us unpack these challenges.
What? Beyond the three hypotheses that our report puts forward, education was discussed as a key component of creating trust: Dr. David Leslie pointed out that we need upskilling in terms of understanding what goes on under the hood, but we also need ethical vocabularies to evaluate the impacts of AI. A "groundswell of education" is needed, said Tim Gordon.
AI Network member Natalie Lafferty noted that in education spaces we really need to understand the implications of all this, given the potential harms from misuse.
We also need something to stimulate learning in the long term, said Nell Watson, Chair of ECPAIS Transparency Expert Focus Group at the IEEE, a thought that resonated with suggestions in the chat calling for more imaginative ways to provide education about AI; might we see a gamification of these conversations to help young people learn?
When it comes to ethical best practice, many of the members and experts on the call felt we are still in murky waters: standards, though beginning to emerge, are urgently needed to help solidify trustworthiness across the wide range of AI practitioners and products.
Who? The AI ethicist, a new professional role that may help to improve trust in those who develop AI systems, was called into question by Dr. Emma Ruttkamp-Bloem, who suggested we really need to ask who this person is. But professionalisation is not the only factor: if we are looking to measure impacts, David felt we need to prioritise the stakeholders who are affected by AI usage.
Many existing efforts to establish ethical guard rails around AI development are not inclusive enough: "the table belongs to everybody from the beginning", said Emma. This raised the question of whether the current conversation is dominated by Western perspectives, a view that resonated with many audience members, including Abeba Birhane, who noted that Africans are notably absent from many of these conversations.
Why? Corporate carrot and stick: Tim Gordon and Nell both felt there is a) a business incentive and b) a regulatory hammer that will push corporations to be proactive about ethical AI practices.
The scene is also being set for heightened public awareness of AI: as artificial intelligence becomes increasingly powerful and embedded in our everyday lives, we may see a moment of sustained moral panic, said Nell. But Dr. David Leslie wants us to be cautious about how we approach the future of technology: let's not be too hasty to anthropomorphise it.
What next? For true democratic governance of AI, we need to step back and think about longer-term patterns that are structural, says David.
Citizen consultations, and understanding how actual users are affected by AI technologies, emerged as a possible route to enable greater well-placed trust across the board.
Illustration: Laurie Avon for Tortoise
Editor and invited experts
Luke Gbedemah – Reporter, Tortoise
Dr David Leslie – Director of Ethics and Responsible Innovation Research, The Alan Turing Institute
Nell Watson – Chair of ECPAIS Transparency Expert Focus Group, IEEE
Peter Campbell – Data & AI Practice Director, Kainos
Tim Gordon – Partner, Best Practice AI