
Artists use “poisoning” tool to fight AI copyright infringement

A new tool is helping artists fight back against AI copyright infringement by “poisoning” and corrupting any model trained on their artwork without consent. “Nightshade”, released by researchers at the University of Chicago, adds pixel-level changes to an image that are invisible to humans but fundamentally scramble how AI sees it. A few thousand poisoned images in a training dataset of billions can cause the model to break in chaotic and unexpected ways.

It’s not just AI sabotage. The aim is to shift the balance of power back to artists by increasing the “cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative”. Copyright lawsuits filed by creators against generative image AI companies such as Stability AI and Midjourney have so far had little success, but companies might think twice about scraping images from the web that could wreck their expensive models. Nightshade has already been downloaded a quarter of a million times in the past week.
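Nightshade’s actual method optimises its perturbations against a text-to-image model’s feature space, and the details are in the Chicago team’s research. As a rough illustration of the general idea only, the sketch below (a hypothetical helper, not Nightshade’s code) shows how pixel changes can be applied to an image at a strength too small for a human viewer to notice:

```python
# Illustrative sketch only: Nightshade's real technique targets how a
# text-to-image model interprets the picture. This toy example just shows
# the general concept of a pixel-level change invisible to human eyes.
import numpy as np
from PIL import Image

def add_imperceptible_noise(path_in: str, path_out: str, strength: float = 2.0) -> None:
    """Hypothetical helper: add a tiny random perturbation to every pixel."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noise = np.random.uniform(-strength, strength, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Usage (file names are placeholders):
# add_imperceptible_noise("artwork.png", "artwork_shaded.png")
```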



