
Artists Can Now Sabotage AI Image Generators to Fight Art Theft

Photo Illustration by Luis G. Rendon/The Daily Beast/Getty

Many artists feel a deep sense of hopelessness about their work being used to train AI image generators. While some creators have tried to fight back in the courtroom, there is little they can do to stop developers from scraping publicly available work from sites like DeviantArt and Tumblr to build their models.

That’s why a team of researchers at the University of Chicago has developed a tool it says can “poison” these AI image generators using artists’ own work. The software, first reported by MIT Tech Review (https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/), is dubbed “Nightshade.” It subtly alters the pixels of an artwork so that any model trained on it produces unpredictable, faulty images.
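To make that mechanism concrete, here is a minimal, hypothetical sketch in PyTorch of the general idea behind this kind of data poisoning. It is not Nightshade’s actual algorithm: it assumes some differentiable image-feature model (the `encoder` argument is a stand-in) and nudges an image’s pixels, within a small budget, so its features drift toward an unrelated target concept while the image still looks unchanged to a person.

```python
# Illustrative sketch of feature-space data poisoning (NOT Nightshade's
# published method): perturb an image within a small pixel budget so its
# embedding under a feature extractor drifts toward an unrelated concept.
import torch

def poison_image(image, target_image, encoder, epsilon=8 / 255, steps=200, lr=0.01):
    """Return a copy of `image` whose features resemble `target_image`'s.

    image, target_image: float tensors in [0, 1], shape (3, H, W)
    encoder: any differentiable image-feature model (assumed available)
    epsilon: max per-pixel change, keeping the edit visually subtle
    """
    with torch.no_grad():
        target_features = encoder(target_image.unsqueeze(0))

    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poisoned = (image + delta).clamp(0, 1)
        features = encoder(poisoned.unsqueeze(0))
        # Pull the poisoned image's features toward the target concept.
        loss = torch.nn.functional.mse_loss(features, target_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Project the perturbation back into the pixel budget.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (image + delta).detach().clamp(0, 1)
```

The small `epsilon` budget is what keeps the edit imperceptible to a human viewer while still misleading a model that trains on the image.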

Of course, the tool’s impact depends on many artists actually using it. Still, the authors note that it offers a proactive way to fight back against AI art theft, and a unique deterrent against training these models on artwork without the artist’s permission.

Read more at The Daily Beast: https://www.thedailybeast.com/artists-can-now-sabotage-ai-image-generators-to-fight-art-theft
