There’s a pervasive sense of hopelessness among artists who don’t want their work used to train AI image generators. While some creators have attempted to fight back in the courtroom, there’s little they can do to stop developers from scraping publicly available work from websites like DeviantArt and Tumblr to build their models.
That’s why a team of researchers at the University of Chicago has developed a tool that they say is capable of “poisoning” these AI image generators using the artists’ own work. The software, first reported by MIT Technology Review, is dubbed “Nightshade.” It works by subtly altering the pixels of an artwork so that any model trained on it produces unpredictable, faulty images.
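The article doesn’t detail Nightshade’s actual algorithm, but the general idea behind this kind of pixel-level poisoning can be sketched: nudge an image’s pixels within a small, hard-to-see budget so that a surrogate feature extractor maps it near a different concept, while the image still looks unchanged to a person. The sketch below is purely illustrative and hypothetical; the ToyEncoder surrogate, the loss, and every parameter are assumptions, not the researchers’ method.

```python
# Hypothetical sketch of pixel-level data poisoning -- NOT Nightshade itself.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Hypothetical stand-in for an image feature extractor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
        )
    def forward(self, x):
        return self.net(x)

def poison(image, target_feat, encoder, eps=8/255, steps=200, lr=0.01):
    """Perturb `image` within an L-infinity budget `eps` so the encoder's
    embedding drifts toward `target_feat` (a different concept's features)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat = encoder((image + delta).clamp(0, 1))
        loss = nn.functional.mse_loss(feat, target_feat)
        loss.backward()
        opt.step()
        # Keep the change imperceptibly small.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()

encoder = ToyEncoder().eval()
art = torch.rand(1, 3, 64, 64)         # stand-in for the artist's image
decoy = torch.rand(1, 3, 64, 64)       # image of an unrelated concept
target_feat = encoder(decoy).detach()  # features the poison steers toward
poisoned = poison(art, target_feat, encoder)
print((poisoned - art).abs().max())    # perturbation stays within eps
```

A model trained on enough such images would associate the artist’s style with the wrong features, which is consistent with the “unpredictable and faulty” outputs the researchers describe, though the real tool presumably targets actual text-to-image training pipelines rather than a toy encoder.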
Of course, this depends on many artists actually adopting the tool. Still, the authors note that it offers a proactive way to fight back against AI art theft, while creating a unique deterrent against training these models on artwork without the artist’s permission.