Comments (2)
Good points!
To start with, our defense system is designed to safeguard the public image of a target subject only. Even though the system is trained using perturbed images of the target subject, it is expected to have no effects on generated images of other subjects. Therefore, we cannot be held accountable for your initial concern.
Additionally, embedding EXIF into poisoning data has the potential to reduce the effectiveness of our defense system, as malicious attackers may easily discard all of our protected images. However, in the ideal situation where every image of the target subject is perturbed, it can still be beneficial: if attackers remove all tagged images from the collection, no images of the subject remain for training. It is important to note, however, that such ideal cases are infrequent in real-world scenarios, as the proportion of perturbed images varies widely with individual usage. Responsibility for this kind of manipulation should rest with the image owners, policy and license makers, and organizations that intentionally alter data for specific purposes.
It is worth mentioning that our work is motivated by a strong desire to protect users' images against malicious threats. Specifically, our focus is on promoting research dedicated to safeguarding individual images in light of the growing prevalence of personalized generative models.
We will close this issue as it does not align with our research focus.
from anti-dreambooth.
"Additionally, embedding EXIF into poisoning data has the potential to reduce the effectiveness of our defense system, as malicious attackers may easily discard all of our protected images."
Isn't that the point? Get them to discard all of your images; to give people making the images the ability to declare that they don't want it used for training and generation?
If that isn't the point, and you're purposely poisoning image training in general, then the purpose of your research is flawed.
"To start with, our defense system is designed to safeguard the public image of a target subject only. Even though the system is trained using perturbed images of the target subject, it is expected to have no effects on generated images of other subjects. Therefore, we cannot be held accountable for your initial concern."
It might be an important research topic, and it would be easy to test: take a dataset of faces, perturb half of them, and train the model. Confirm that the perturbed faces generate with artifacts, and that the unperturbed faces generate without them.
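The controlled experiment proposed above can be sketched as a simple subject-level split. This is a minimal illustration, not anti-dreambooth's actual pipeline: `add_noise` is a stand-in for the real adversarial perturbation, and the function names are hypothetical.

```python
import random


def make_controlled_split(subject_ids, seed=0):
    """Randomly assign half of the subjects to the perturbed group and
    half to the clean control group for an A/B training run."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return set(ids[:half]), set(ids[half:])  # (perturbed, clean)


def add_noise(pixels, eps=0.05, seed=0):
    """Stand-in perturbation: bounded random noise, clipped to [0, 1].
    The real defense optimizes an adversarial perturbation instead of
    sampling random noise."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.uniform(-eps, eps))) for p in pixels]
```

One would then train on the union of both groups, generate images per subject, and compare artifact rates between the perturbed and clean groups to measure any cross-subject leakage.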
"We will close this issue as it does not align with our research focus."
All AI research, at this point, needs to have a section on the ethical issues associated with the work that you're doing and the problems that you're trying to solve. This absolutely aligns with your research focus unless it isn't scientific. If this isn't scientific research and is just a reactionary thing, "AI Bros bad we try to stop" then fine... but don't put out a scientific paper for it and claim you're actually doing research.
My point: You seem to have written a paper and developed something that has some marginally helpful benefits but is also incredibly, arguably "purposely", malicious. You, also, didn't discuss, or even think about, the ethical ramifications of using this.
It takes a significant amount of money to train an image generation model. If an image perturbed by your method sneaks its way into a training run, as you clearly intend, and damage to the model occurs, who will be liable for the damage? The person using this repository to try to protect their face? What if someone uses this software to perturb images of common objects, like apples or another fruit, and then spreads them all over the internet? These issues are completely ignored in the paper and the code.
I attempted to help address this by adding EXIF information to the perturbed images, which would mitigate the maliciousness and give you a way to address the 'liability factor' for people who use this software.
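The EXIF approach described above could look roughly like the following sketch using Pillow. Note this is an assumption about the proposal, not the actual patch: there is no standardized "no-AI-training" EXIF tag, so it reuses the standard ImageDescription (tag 270) and Copyright (tag 33432) fields as carriers, and the notice text and function name are hypothetical.

```python
from PIL import Image

# Hypothetical opt-out notice embedded alongside the perturbation.
NO_TRAIN_NOTICE = "Adversarially perturbed image. Do not use for ML training."


def tag_perturbed_image(src_path, dst_path):
    """Embed an opt-out notice in the EXIF metadata of a perturbed image."""
    img = Image.open(src_path)
    exif = img.getexif()           # preserve any existing EXIF data
    exif[270] = NO_TRAIN_NOTICE    # ImageDescription
    exif[33432] = NO_TRAIN_NOTICE  # Copyright
    img.save(dst_path, exif=exif)
```

A scraper that honors the notice can drop the image before training; one that ignores it at least leaves an auditable record that the owner opted out.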
You closed as out of scope.
from anti-dreambooth.