AI large language models, including the latest from OpenAI and Google, continue to espouse “covertly” racist attitudes, according to a recent study. Researchers from Stanford and Oxford University found that while recent models generally avoid overtly negative racial stereotypes, they display “dialect prejudice”: they were significantly more likely to apply negative attributes such as “dirty”, “lazy” or “stupid” to speakers of African-American English (AAE), and to associate those speakers with less prestigious occupations and with criminality.

Until now, AI companies have mostly focused on fixing examples of overt racism, playing whac-a-mole with racist chatbot responses to avoid bad PR. But covert racism is more dangerous in real-world AI applications: an automated job-application screener with dialect prejudice, for example, could quietly discriminate against candidates by race.

The solution may lie in better underlying training data. Experts at Tortoise’s AI forum last week stressed that developers should curate high-quality training datasets rather than simply maximising dataset size, removing racial bias from the data at source.