tafim's Introduction

TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations
Official PyTorch implementation of the ECCV 2022 paper

Teaser Image

TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations
Shivangi Aneja, Lev Markhasin, Matthias Nießner
https://shivangi-aneja.github.io/projects/tafim

Abstract: Face manipulation methods can be misused to affect an individual’s privacy or to spread disinformation. To this end, we introduce a novel data-driven approach that produces image-specific perturbations which are embedded in the original images. The key idea is that these protected images prevent face manipulation by causing the manipulation model to produce a predefined manipulation target (a uniformly colored output image in our case) instead of the actual manipulation. In addition, we propose to leverage a differentiable compression approximation, making the generated perturbations robust to common image compression. To protect against multiple manipulation methods simultaneously, we further propose a novel attention-based fusion of manipulation-specific perturbations. Compared to traditional adversarial attacks that optimize noise patterns for each image individually, our generalized model only needs a single forward pass, thus running orders of magnitude faster and allowing for easy integration in image processing stacks, even on resource-constrained devices like smartphones.
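To make the mechanism concrete, here is a minimal PyTorch sketch of the core idea: a single forward pass through a generator produces an image-specific perturbation, which is clamped to a small budget and embedded in the image. This is an illustration only, not the repo's implementation; the architecture, the epsilon value, and all names below are assumptions.

# Illustrative sketch (hypothetical names; not the repo's API)
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Stand-in protection model: one forward pass maps an image
    to an image-specific perturbation."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

epsilon = 0.05                             # assumed perturbation budget
gen = PerturbationGenerator()
x = torch.rand(1, 3, 256, 256)             # input image in [0, 1]

delta = gen(x).clamp(-epsilon, epsilon)    # bound the perturbation
x_protected = (x + delta).clamp(0, 1)      # protected image stays a valid image

# Training would push manipulation_model(x_protected) toward a predefined
# target (e.g., a uniformly colored image) while keeping delta small.

At test time only this single forward pass is needed, which is what makes the approach much faster than per-image noise optimization.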

Getting started

Pre-requisites

  • Linux
  • NVIDIA GPU + CUDA CuDNN
  • Python 3.X

Installation

  • Dependencies:
    It is recommended to install all dependencies using pip. The dependencies for setting up the environment are provided in requirements.txt.
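    For example, install them from the repository root with:

python -m pip install -r requirements.txt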

Pre-trained Models

Please download these models, as they will be required for experiments.

Path Description
pSp Encoder pSp model trained on the FFHQ dataset for StyleGAN inversion.
StyleClip StyleClip models trained on the FFHQ dataset for text-based manipulation (Afro, Angry, Beyonce, BobCut, BowlCut, Curly Hair, Mohawk, Purple Hair, Surprised, Taylor Swift, Trump, zuckerberg).
SimSwap SimSwap model trained for face swapping.
SAM SAM model trained for age transformation (used in the supplementary material).
StyleGAN-NADA StyleGAN-NADA models (used in the supplementary material).
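After downloading, the checkpoint locations need to be registered in the path configuration (see Path Configuration below). As a hypothetical sketch, assuming a model_paths-style dictionary in configs/paths_config.py similar to the pSp codebase this repo builds on (the actual key names may differ):

# configs/paths_config.py -- illustrative only; the key names are assumptions
model_paths = {
    'psp_encoder': '/path/to/psp_ffhq_encode.pt',
    'styleclip': '/path/to/styleclip_mapper.pt',
    'simswap': '/path/to/simswap.pth',
    'sam': '/path/to/sam_age.pt',
}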

Training models

The code is well-documented and should be easy to follow.

  • Source Code: $ git clone this repo and install the Python dependencies from requirements.txt. The source code is implemented in PyTorch, so familiarity with PyTorch is expected.
  • Dataset: We used the FFHQ dataset for our experiments, which is publicly available here. We divide it into training (5000 images), validation (1000 images), and test (1000 images) splits. We additionally used the CelebA-HQ and VGGFace2-HQ datasets for some further experiments; these can be downloaded from their respective websites. All images are resized to 256 × 256 during the transform.
  • Manipulation Methods: We examine our method primarily on three popular manipulations: (1) style mixing using pSp, (2) face swapping using SimSwap, and (3) textual editing using StyleClip. This code borrows heavily from those repositories for the implementation of these manipulations; please set up and install the dependencies for each method from its original implementation. Scripts to check whether the image manipulation models work can be found in the manipulation_tests/ directory. Make sure these scripts run and that you can perform inference with these models.
  • Path Configuration: Configure the following paths before training.
    • Refer to configs/paths_config.py to define the necessary data paths and model paths for training and evaluation.
    • Refer to configs/transforms_config.py for the transforms defined for each dataset/experiment.
    • Refer to configs/common_config.py and change the architecture_type and dataset_type according to the experiment you wish to perform.
    • Finally, refer to configs/data_configs.py for the source/target data paths for the train and test sets as well as the transforms.
    • If you wish to experiment with your own dataset, you can simply make the necessary adjustments in
      1. data_configs.py to define your data paths.
      2. transforms_config.py to define your own data transforms.
    • Refer to configs/attack_configs.py and change net_noise to select the protection model architecture; an example of these config edits is sketched below.
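For illustration, a typical set of pre-training config edits might look like the following. architecture_type, dataset_type, and net_noise are the option names mentioned above, but the specific values shown are assumptions; check the config files for the options actually supported.

# configs/common_config.py -- example values (assumed)
architecture_type = 'pSp'        # manipulation model to protect against
dataset_type = 'ffhq_encode'     # dataset/experiment to run

# configs/attack_configs.py -- 'unet_64' is the default reported in the
# issues section below
net_noise = 'unet_64'            # protection model architecture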
  • Training: The main training scripts for the different protection-model configurations are available in the trainer_scripts directory. Depending on the manipulation method, execute one of the following commands:
# For self-reconstruction/style-mixing task
python -m trainer_scripts.train_protection_model_pSp 

# For face-swapping task
python -m trainer_scripts.train_protection_model_simswap

# For textual editing task
python -m trainer_scripts.train_protection_model_styleclip

# For protection against Jpeg Compression
python -m trainer_scripts.train_protection_model_pSp_jpeg

# For combining perturbations from multiple manipulation methods 
python -m trainer_scripts.train_protection_model_all_attention
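For intuition on the last command, the attention-based fusion of manipulation-specific perturbations can be pictured roughly as below. This is a minimal illustrative sketch under assumed names, not the code in train_protection_model_all_attention.

# Hypothetical sketch of attention-based perturbation fusion
import torch
import torch.nn as nn

class PerturbationFusion(nn.Module):
    """Fuse K manipulation-specific perturbations into one image-sized
    perturbation via pixel-wise attention weights."""
    def __init__(self, num_methods, channels=3):
        super().__init__()
        self.attn = nn.Conv2d(num_methods * channels, num_methods, kernel_size=1)

    def forward(self, deltas):
        # deltas: (B, K, C, H, W), one perturbation per manipulation method
        b, k, c, h, w = deltas.shape
        logits = self.attn(deltas.reshape(b, k * c, h, w))   # (B, K, H, W)
        weights = torch.softmax(logits, dim=1).unsqueeze(2)  # (B, K, 1, H, W)
        return (weights * deltas).sum(dim=1)                 # (B, C, H, W)

fusion = PerturbationFusion(num_methods=3)
deltas = torch.randn(1, 3, 3, 256, 256) * 0.05   # e.g. pSp, SimSwap, StyleClip
delta_all = fusion(deltas)                        # one combined perturbation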
  • Evaluation: Once training is complete, specify the path to the protection model and evaluate. For instance, to evaluate the self-reconstruction task for the pSp encoder, execute:
    python -m testing_scripts.test_protection_model_pSp -p protection_model.pth

Citation

If you find our dataset or paper useful for your research, please include the following citation:


@InProceedings{aneja2022tafim,
               author="Aneja, Shivangi and Markhasin, Lev and Nie{\ss}ner, Matthias",
               title="TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations",
               booktitle="Computer Vision -- ECCV 2022",
               year="2022",
               publisher="Springer Nature Switzerland",
               address="Cham",
               pages="58--75",
               isbn="978-3-031-19781-9"
}

Contact Us

If you have questions regarding the dataset or code, please email us at [email protected]. We will get back to you as soon as possible.


tafim's Issues

Comparison of the perturbation loss between eq. (6) and (14)

Hello @shivangi-aneja, I have a question about the perturbation loss used for the single vs. multiple manipulation methods.

In eq. (14) you try to minimize $||\delta_i^{all}||_2$, but in eq. (6) you try to minimize $||X_i^p-X_i||_2 + ||X_i^{Gp}-X_i||_2$. I understand that both formulas are approximately the same, since we approximately have $$||X_i^p-X_i||_2=||\mathrm{Clamp}_{\epsilon}(X_i+\delta_i)-X_i||_2\approx||\delta_i||_2$$
If my supposition is correct, why use $||\delta_i^{all}||_2$ in eq. (14) instead of $||X_i^{all}-X_i||_2$?

I suppose this is not a really important question, since there wouldn't be much difference in the result, but I would still like to know if there is a particular reason for this choice.

In any case, thank you for this great work. I've learned a lot from your paper.

Pretrained Model For TAFIM

Congratulations on your great work! May I ask whether you can release the pretrained TAFIM models? It would be really helpful.

Arcface checkpoint

Hi Aneja, thanks for your amazing contribution. I would like to perform an attack on SimSwap, but I am having some problems with the format of the ArcFace checkpoint provided by the SimSwap author as a tar file. Could you kindly provide the arcface.pth file shown in your project configuration? Thanks a lot!

About the performance of the Attack on SimSwap

Hi Aneja, I trained the face protection model together with the SimSwap model under the following configuration:
net_noise = 'unet_64' (default), n_epochs = 100, perturb_wt = 10 (default), lr = 0.0001 (default)
Due to limited computing resources, I used only 2000 images from the CelebA-HQ dataset and trained the model for 100 epochs. The test results are shown in the attached image: the second column is the source image with the perturbation, and the noise seems quite obvious. I am wondering if this means I need to train for more epochs?
The testing script I used is here: https://github.com/wkyhit/TAFIM/blob/main/testing_scripts/test_protection_model_SimSwap.py
