
Add Exif information or another indicator that technology is used to try and prevent AI training using this image. (anti-dreambooth, closed)

Teravus commented on June 1, 2024
Add Exif information or another indicator that technology is used to try and prevent AI training using this image.


Comments (2)

hao-pt commented on June 1, 2024

Good points!

To start with, our defense system is designed to safeguard the public image of a target subject only. Even though the system is trained using perturbed images of the target subject, it is expected to have no effects on generated images of other subjects. Therefore, we cannot be held accountable for your initial concern.

Additionally, embedding EXIF into poisoning data has the potential to reduce the effectiveness of our defense system, as malicious attackers may easily discard all of our protected images. However, in the ideal situation where every image of the target subject is contaminated, it can still be beneficial: there will be no images left once the attackers remove them from the collection. It is important to note, however, that such ideal cases are infrequent in real-world scenarios, as the number of perturbed images varies widely with individual usage. Responsibility for this kind of labeling should instead rest with owners, policy and license makers, and organizations who intentionally manipulate data for specific purposes.
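To make that concern concrete: a scraper assembling a training set could drop every tagged image with a few lines of Python. This is a minimal sketch using Pillow; the marker string and the choice of the ImageDescription field are assumptions, since no standardized opt-out EXIF tag exists.

```python
from PIL import Image

NO_TRAIN_MARKER = "no-ai-training"  # hypothetical marker string
IMAGE_DESCRIPTION = 0x010E          # standard EXIF ImageDescription tag

def is_marked_no_training(path: str) -> bool:
    """Return True if the image's EXIF ImageDescription contains the marker."""
    exif = Image.open(path).getexif()
    return NO_TRAIN_MARKER in str(exif.get(IMAGE_DESCRIPTION, ""))

# A scraper can then trivially drop every protected image:
# keep = [p for p in candidates if not is_marked_no_training(p)]
```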

It is worth mentioning that our work is motivated by a strong desire to protect users' images against malicious threats. Specifically, our focus is on promoting research dedicated to safeguarding individual images in light of the growing prevalence of personalized generative models.

We will close this issue as it does not align with our research focus.


Teravus commented on June 1, 2024

"Additionally, embedding EXIF into poisoning data has the potential to reduce the effectiveness of our defense system, as malicious attackers may easily discard all of our protected images."

Isn't that the point? Get them to discard all of your images, and give the people making the images the ability to declare that they don't want them used for training and generation?

If that isn't the point and you're purposely poisoning image training in general, then the purpose of your research is flawed.

"To start with, our defense system is designed to safeguard the public image of a target subject only. Even though the system is trained using perturbed images of the target subject, it is expected to have no effects on generated images of other subjects. Therefore, we cannot be held accountable for your initial concern."
It might be an important research topic, and it would be easy to test. Take a dataset of faces, perturb half of them, and train the model. Confirm that the perturbed faces generate with artifacts, and confirm that the unperturbed faces generate without artifacts. A sketch of that experiment follows below.
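Here is a minimal sketch of that control experiment. All function names are illustrative stand-ins, not this repository's actual API.

```python
import random

# Illustrative stand-ins, not this repository's actual API:
#   perturb_images()                        -> the Anti-DreamBooth perturbation step
#   dreambooth_train(), dreambooth_sample() -> any DreamBooth-style pipeline

def control_experiment(subjects, seed=0):
    """subjects: list of (name, images). Perturb half the subjects, train one
    model on everything, and return generations for every subject so artifacts
    can be compared between the perturbed and clean halves."""
    rng = random.Random(seed)
    rng.shuffle(subjects)
    half = len(subjects) // 2
    perturbed = [(name, perturb_images(imgs)) for name, imgs in subjects[:half]]
    clean = subjects[half:]

    model = dreambooth_train(perturbed + clean)

    # Expected outcome if the defense really is subject-specific:
    # heavy artifacts for the perturbed half, none for the clean half.
    return {
        name: dreambooth_sample(model, prompt=f"a photo of {name}")
        for name, _ in perturbed + clean
    }
```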

"We will close this issue as it does not align with our research focus."
All AI research, at this point, needs to have a section on the ethical issues associated with the work you're doing and the problems you're trying to solve. This absolutely aligns with your research focus unless it isn't scientific. If this isn't scientific research and is just a reactionary thing ("AI bros bad, we try to stop"), then fine... but don't put out a scientific paper for it and claim you're actually doing research.

My point: you seem to have written a paper and developed something that has some marginally helpful benefits but is also incredibly, arguably "purposely", malicious. You also didn't discuss, or even think about, the ethical ramifications of using it.

It takes a significant amount of money to train an image generation model. If an image perturbed by your method sneaks its way into a training run, as you clearly intend, and damage to the model occurs, who will be liable for the damage? The person using this repository to try to protect their face? What if someone uses this software to perturb images of common objects, like apples or another fruit, and then spreads them all over the internet? These issues are completely ignored in the paper and the code.

I attempted to help address this by adding EXIF information to the perturbed images, which would mitigate the maliciousness and give you a way to address the 'liability factor' for people who use this software.
You closed it as out of scope.
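For reference, writing such a marker takes only a few lines with Pillow. This is a sketch; the marker text and the use of the ImageDescription field are assumptions, as there is no standardized opt-out EXIF tag.

```python
from PIL import Image

NO_TRAIN_MARKER = "perturbed: do not use for AI training"  # hypothetical text
IMAGE_DESCRIPTION = 0x010E  # standard EXIF ImageDescription tag

def tag_perturbed_image(src: str, dst: str) -> None:
    """Copy an image, writing an opt-out notice into its EXIF metadata."""
    img = Image.open(src)
    exif = img.getexif()
    exif[IMAGE_DESCRIPTION] = NO_TRAIN_MARKER
    img.save(dst, exif=exif)

# e.g. tag each output of the perturbation step:
# tag_perturbed_image("outputs/noise_0.jpg", "outputs/noise_0_tagged.jpg")
```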
