When ChatGPT and other AI chatbots generate text on an industrial scale, who moderates it? Who checks that these powerful engines are not spewing out text inspired by the darkest places on the internet? For now, the tech giants’ answer seems to be low-paid East African content moderators, who often end up traumatised by what they read. A WSJ investigation found that OpenAI, the maker of ChatGPT, paid Kenyan workers as little as $1.50 an hour to categorise thousands of passages about self-harm, child sexual abuse, bestiality and brutality. Without this human input, the technology would not exist today.
