
The dangers of “covertly” racist AI models

AI large language models, including the latest from OpenAI and Google, continue to espouse "covertly" racist attitudes, according to a recent study. Researchers from Stanford and Oxford University found that while recent models generally avoided overtly negative racial stereotypes, they displayed "dialect prejudice": models were significantly more likely to apply negative attributes such as "dirty", "lazy" or "stupid" to speakers of African-American English (AAE), and to associate them with less prestigious occupations or with criminality. Until now, AI companies have mostly focused on fixing examples of overt racism, playing whack-a-mole with racist chatbot responses to avoid bad PR. But covert racism is more dangerous in real-world AI applications: it could, for example, introduce racial discrimination into automated job-application screening. The solution may lie in better underlying training data. Experts at Tortoise's AI forum last week stressed that developers should curate high-quality training datasets rather than focus on dataset size alone, removing racial bias from the data at source.



Copyright © 2026 Tortoise Media

All Rights Reserved