A new tool is helping artists fight back against AI copyright infringement by “poisoning” and corrupting any model trained on their artwork without consent. “Nightshade”, released by researchers at the University of Chicago, adds pixel-level changes to an image that are invisible to humans but fundamentally scramble how AI models see it. A few thousand poisoned images in a training dataset of billions can cause the resulting model to break in chaotic and unexpected ways. It’s not just AI sabotage: the aim is to shift the balance of power back to artists by increasing the “cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative”. Copyright lawsuits filed by creators against generative image AI companies like Stability AI and Midjourney have so far had little success, but companies might think twice about scraping images from the web if those images can wreck their expensive models. Nightshade has already been downloaded a quarter of a million times in the past week.
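
For readers curious about the underlying idea, the sketch below illustrates the general data-poisoning technique: optimise a small, bounded perturbation so a model’s internal representation of an image drifts toward a different concept. This is not Nightshade’s actual algorithm; the feature extractor (a ResNet-18), the perturbation budget, and the step counts are all illustrative assumptions.

```python
# Illustrative sketch of image poisoning (not Nightshade's real method):
# nudge an image with a small, hard-to-see perturbation so that a feature
# extractor "sees" it as a different target concept.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
model.fc = torch.nn.Identity()        # use penultimate features, not class logits
preprocess = weights.transforms()     # resize / crop / normalise, differentiable on tensors

def poison(image, target_image, epsilon=8 / 255, steps=200, lr=0.01):
    """Return `image` plus a bounded perturbation whose features move toward
    those of `target_image`. `epsilon` caps the per-pixel change so the edit
    stays visually subtle; all numbers here are assumptions for illustration."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = model(preprocess(target_image).unsqueeze(0))
    for _ in range(steps):
        poisoned = (image + delta).clamp(0, 1)
        feat = model(preprocess(poisoned).unsqueeze(0))
        loss = F.mse_loss(feat, target_feat)   # pull features toward the target concept
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-epsilon, epsilon)   # keep the change imperceptible
    return (image + delta).detach().clamp(0, 1)
```

In this toy setup, an artwork poisoned this way still looks like the original to a person, but its features resemble those of the target image, so a model trained on many such mislabelled-looking examples learns a corrupted association, which is the broad effect the researchers describe.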