Alex S:

> A new ‘data poisoning tool’ lets artists poison their digital art to prevent AI from using it. Essentially, the tool takes your art and changes a bunch of pixels in ways that are invisible to the human eye but legible to AI systems.

It doesn't really work though, and there's no way it could ever work in general.

Jeremiah Johnson:

Can you elaborate on that? Their test run seemed convincing.

Alex S:

This is an example of an "adversarial attack". You can always find one against any given ML model; so far it seems impossible to build a model that can't be tricked into thinking a cat is a giraffe (or whatever) by an image that looks no different to a human.
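(For the curious, here's a minimal sketch of the classic FGSM attack in PyTorch, the simplest way to build such a trick image. The tiny model, random "image", and the eps value are all placeholders I made up for illustration, not anything from the tool being discussed.)

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.03):
    # Fast Gradient Sign Method: nudge each pixel by +/- eps in whichever
    # direction most increases the model's loss. eps is kept small so the
    # change stays invisible to a human.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Toy usage with a placeholder classifier and a random "image".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
adv = fgsm_attack(model, image, label)  # looks identical, classifies differently
```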

But the same goes the other way around: if you poison all your images, someone can always train a new model that handles them fine. Basically, whichever side moved most recently has the advantage.
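(Continuing the sketch: "handling the poison" can be as simple as adversarial training, i.e. attacking the model as you train it and learning from both versions of each batch. This reuses fgsm_attack and the toy model from above; the batch here is random stand-in data.)

```python
# Adversarial training sketch: fold attacked copies of each batch back in,
# so the next model stops being fooled by this particular perturbation.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
images = torch.rand(8, 3, 32, 32)          # stand-in for a real batch
labels = torch.randint(0, 10, (8,))
adv = fgsm_attack(model, images, labels)   # attack the model we're training...
loss = (F.cross_entropy(model(images), labels)
        + F.cross_entropy(model(adv), labels))  # ...and train on both
optimizer.zero_grad()
loss.backward()
optimizer.step()
```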

Some other attacks I've seen are also pretty easy to strip from images: make a few variants of each picture with filters applied (a standard trick called "augmentation") and the perturbation washes off. A sketch of the idea follows.
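(Again, just an illustration of the idea rather than anyone's actual pipeline: a light blur plus a downscale/upscale round trip is the kind of filtering that tends to destroy pixel-level perturbations while leaving the visible image intact. The function name and parameters are mine.)

```python
from PIL import Image, ImageFilter

def scrub(path):
    # Cheap "augmentation"-style cleanup: light blur, then a resize round
    # trip. The visible content survives; fragile pixel-level tweaks
    # mostly don't.
    img = Image.open(path)
    img = img.filter(ImageFilter.GaussianBlur(radius=1))
    small = img.resize((img.width // 2, img.height // 2))
    return small.resize((img.width, img.height))
```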
