An AI Poison Pill
posted October 30, 2023
A professor at the University of Chicago created a new tool called "Nightshade" which functions as a poison pill for generative AI systems that scrape your images for training data. The idea is that by injecting a special set of pixels and invisible data into an image, you can cause an AI to misinterpret that image, preventing it from being used effectively for training.
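To make the mechanism a little more concrete, here's a minimal sketch of what an "imperceptible perturbation" looks like in code. To be clear, this is not Nightshade's actual algorithm - the real tool optimizes its noise against specific model features so the image gets mislabeled during training, whereas the noise below is just random. The function name, file names, and epsilon budget are placeholders for illustration.

```python
# Toy illustration only: shows the general shape of a small, hard-to-see
# pixel perturbation. It will NOT actually poison or fool any model.
import numpy as np
from PIL import Image

def add_perturbation(path_in: str, path_out: str, epsilon: int = 4) -> None:
    """Shift each pixel channel by at most +/- epsilon, then save the result."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)

    # A real poisoning tool would optimize this noise against a target model;
    # here it's purely random, so it only demonstrates the perturbation budget.
    rng = np.random.default_rng(seed=0)
    noise = rng.integers(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)

    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

add_perturbation("artwork.png", "artwork_shaded.png")
```

The point of the sketch is just that the change is tiny per pixel - small enough that a human viewer doesn't notice, but (in Nightshade's case) carefully targeted so a model learning from the image draws the wrong associations.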
I've looked over the research preview and agree with the idea behind the tool - it'd be nice for artists to be able to opt in or out of training data - but I'm not convinced this would actually work.
My skepticism aside, it's a topic worth pondering in more depth. Obviously AI is skirting a lot of copyright issues and potentially hurting a lot of artists in a myriad of ways. That's not to say it's a black-and-white issue - AI is beneficial as well - but this is a brave new world. How can we train these machines to be useful while also giving credit to those who created the foundation? I fear the answer may be "legislation" (of which I am even more skeptical), but we'll just have to wait and see.