
#TheAIRevolution

ThinkIn

Making sense of consciousness, with Luke Gbedemah

Consciousness is a tricky subject. Academics, philosophers, artists and mathematicians have grappled with its definition for centuries. There's something mysterious about our perception of the world, and the way it gives rise to the feeling of conscious being. Something mysterious that makes us who we are. With the increasing sophistication of artificial intelligence models – like Google's LaMDA – questions about the nature of consciousness are surfacing. Can a program be sentient? Do other animals possess a form of consciousness similar to ours? Do conscious things deserve particular rights? The brain and the body, the nervous system and the senses, all seem to play a role. What on earth is going on in there?

Editor and invited experts:
Luke Gbedemah, Data Reporter
Anil Seth, Professor of Cognitive and Computational Neuroscience, University of Sussex; Co-Director of the Sackler Centre for Consciousness; Author of 'Being You: a new science of consciousness'

ThinkIn

Building trust: how do we ensure AI is deployed responsibly?

In partnership with Kainos, Tortoise hosted a roundtable event that addressed the complexities of building trust in artificial intelligence (AI). The event centred on a recently published report that Tortoise and Kainos produced with the help of over 20 leading experts in ethical and responsible AI: a piece of work exploring how the misuse of artificial intelligence can be addressed to build a future of trust for everyone. For this event, we invited some of the report's contributing experts to help us unpack these challenges.

What? Beyond the three hypotheses our report puts forward, education was discussed as a key component of creating trust. Dr David Leslie pointed out that we need upskilling in understanding what goes on under the hood, but also ethical vocabularies to evaluate the impacts of AI. A "groundswell of education" is needed, said Tim Gordon. AI Network member Natalie Lafferty noted that in education spaces we really need to understand the implications of all this, given the potential harms from misuse. We also need something to stimulate learning in the long term, said Nell Watson, Chair of the ECPAIS Transparency Expert Focus Group at the IEEE – a thought that resonated with suggestions in the chat calling for more imaginative ways to provide education about AI; might we see a gamification of these conversations to help young people learn? When it comes to ethical best practice, many of the members and experts on the call felt we are still in murky waters: standards, though beginning to emerge, are urgently needed to help solidify trustworthiness across the wide range of AI practitioners and products.

Who? The AI ethicist, a new professional role that may help improve trust in those who develop AI systems, was called into question by Dr Emma Ruttkamp-Bloem, who suggested that we really need to ask who this person is. But professionalisation is not the only factor; if we are looking to measure impacts, David felt that we need to prioritise the stakeholders affected by AI usage. Many existing efforts to establish ethical guard rails around AI development are not inclusive enough: "the table belongs to everybody from the beginning", said Emma. This raised the question of whether the current conversation is dominated by Western perspectives – a view that resonated with many audience members, including Abeba Birhane, who noted that Africans are notably absent from many of these conversations.

Why? Corporate carrot and stick: Tim Gordon and Nell both felt there is (a) a business incentive and (b) a regulatory hammer that will push corporations to be proactive about ethical AI practices. The scene is also being set for heightened public awareness of AI: as artificial intelligence becomes increasingly powerful and embedded in our everyday lives, we may see a moment of sustained moral panic, said Nell. But Dr David Leslie wants us to be cautious about how we approach the future of technology: let's not be too hasty to anthropomorphise it.

What next? For true democratic governance of AI, we need to step back and think about longer-term patterns that are structural, says David. Citizen consultations and understanding how actual users are affected by AI technologies emerged as possible routes to enabling greater, well-placed trust across the board.
Illustration: Laurie Avon for Tortoise

Editor and invited experts:
Luke Gbedemah, Reporter, Tortoise
Dr David Leslie, Director of Ethics and Responsible Innovation Research, The Alan Turing Institute
Nell Watson, Chair of ECPAIS Transparency Expert Focus Group, IEEE
Peter Campbell, Data & AI Practice Director, Kainos
Tim Gordon, Partner, Best Practice AI

ThinkIn

How can responsible businesses ensure they approach AI the right way?

This is a digital-only ThinkIn. Businesses face a number of obstacles when it comes to adopting or absorbing more artificial intelligence, from the availability of talent to making a financial case for investment; these challenges demand unique and sector-specific solutions. Almost all companies have an incentive to adopt AI, and they face some common challenges in doing so responsibly: not least creating trust, establishing appropriate governance and navigating regulation. How can businesses ensure that they approach adoption in a way that protects stakeholders, observes regulation and puts value for their people at the centre of AI projects?

Editor and invited experts:
Alexi Mostrous, Editor, Tortoise Media
Anand Rao, Global Head of AI, PwC
Caroline Gorski, Group Director of R² Data Labs, Rolls-Royce

ThinkIn

What will life be like in 2041? with Kai-Fu Lee

This is a digital-only ThinkIn. How will AI change the world over the next 20 years? We're told the technology has the power to transform humanity, but will it precipitate a cleaner, more connected utopia, or something altogether more sinister? Every part of our lives will be affected – how we communicate and learn, how we live, work and play. There are few people in the world who understand AI better than Kai-Fu Lee, one of the world's leading computer scientists, former president of Google China and bestselling author of AI Superpowers. In his latest book, AI 2041, Lee teams up with celebrated novelist Chen Qiufan to tell the stories and the science behind our AI-driven future.

Editor:
James Harding, Co-founder and Editor

ThinkIn

Will technology widen the power gap? In conversation with Azeem Azhar

This is a newsroom ThinkIn. In-person and digital-only tickets are available. Azeem Azhar – writer, entrepreneur and creator of the hit Exponential View newsletter and podcast – argues that accelerating technology risks leaving our social institutions behind, with devastating implications for our way of life. Exponential View is regarded as one of the best-researched and most thought-provoking newsletters in tech. In his new book, Azhar draws on nearly three decades of conversations with the world's leading thinkers to outline models that explain the effects technology is having on society. New technology, he shows, is developing at an increasing, exponential rate. But human-built institutions – from our businesses to our political norms – can only ever adapt at a slower, incremental pace. The result is an 'exponential gap' between the power of new technology and our ability to keep up. Pre-order Azeem's book, Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It.

Editor:
Alexi Mostrous, Investigations Editor

ThinkIn

The Tortoise Cyber Summit

At the Sir Harry Evans Summit for Investigative Journalism, Expedia Group chairman Barry Diller spoke of his optimism about AI technologies and, in particular, of his faith in one man – Sam Altman. "Here you've got somebody who is so purely motivated, who is not economically driven and who understands the dangers that are in front of AI," Diller said. "Unfortunately, or fortunately… he is hardly the only player here."

Sam Altman is CEO of OpenAI and sits on Expedia Group's board. Under his stewardship, OpenAI launched ChatGPT, the chatbot that has made AI more accessible to the public than ever. It was no surprise, then, that when he gave evidence to members of the US Senate about the future of AI, everyone was keen to hear what he had to say.

During Tuesday's three-hour hearing, the OpenAI executive described the incredible potential of the technology that underpins his company, saying it could be used to cure cancer or combat climate change. But he also outlined the risks, including misinformation, fraud, job automation, the exploitation of women, and copyright infringement. "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," Altman told the Senate. "We want to work with the government to prevent that from happening."

Altman proposed that lawmakers introduce regulation "urgently", and set out a three-point plan for them to do so. He first suggested that the US government create a federal agency that can grant – and revoke – licences to companies building AI models of a certain size. He also called for a legally binding set of safety guidelines for AI models. Finally, Altman proposed that independent, third-party auditors review AI tools produced by companies.

None of these are new ideas. As AI technologies have developed, governments have discussed ways to regulate them, but plans are only just starting to emerge. The EU has already issued strict guidelines for the use of artificial intelligence – including large language models and generative AI – and the UK's Competition and Markets Authority is planning a review of the AI market.

The US Senate was broadly receptive to Sam Altman's suggestions. Senators praised his commitment to safety – and the fact that, on the surface, he doesn't seem to be motivated by profit, because he is not a majority shareholder in OpenAI. That may be true, but Altman is, at his core, a businessman, and it is in his interest to make the company profitable. If he collaborates with US lawmakers, he will have a say in the regulations that ultimately govern OpenAI.

ThinkIn

Toxicity in tech: Why are Google’s leading AI ethics researchers being ‘silenced’?


ThinkIn

The Global AI Summit
