Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking

This is the code for the CVPR'20 paper "Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking" by Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, and Liang Lin.

Prerequisites

  • Python 2 / Python 3
  • PyTorch 0.4.1 (not tested with PyTorch >= 1.0)
  • CUDA
  • NumPy
  • Matplotlib
  • SciPy

Prepare data

Create a directory under this repo to store the ReID datasets:

mkdir data/

If you want to store the datasets in another directory, specify --root path_to_your/data when running the training code. Please follow the instructions below to prepare each dataset. After that, you can simply pass -d the_dataset when running the training code.

Market1501 :

  1. Download the dataset to data/ from http://www.liangzheng.org/Project/project_reid.html.
  2. Extract the dataset and rename the folder to market1501. The data structure should look like (a quick sanity check is sketched after this list):
market1501/
    bounding_box_test/
    bounding_box_train/
    ...
  3. Use -d market1501 when running the training code.
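
A quick way to catch path mistakes is to check the layout before training. The snippet below is a minimal sketch, not part of this repository, and it assumes the default --root of data/; adjust the paths if you chose a different root.

import os

# Minimal sanity check for the expected Market1501 layout (not shipped with
# this repo). Adjust `root` if you passed a different --root to train.py.
root = "data/market1501"
# query/ is part of the standard Market1501 release, alongside the two
# bounding_box_* folders named above.
expected = ["bounding_box_train", "bounding_box_test", "query"]

for name in expected:
    path = os.path.join(root, name)
    print("{}: {}".format(path, "ok" if os.path.isdir(path) else "MISSING"))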

CUHK03 [13]:

  1. Create a folder named cuhk03/ under data/.
  2. Download the dataset to data/cuhk03/ from http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html and extract cuhk03_release.zip, so you will have data/cuhk03/cuhk03_release.
  3. Download the new split [14] from person-re-ranking. What you need are cuhk03_new_protocol_config_detected.mat and cuhk03_new_protocol_config_labeled.mat. Put these two .mat files under data/cuhk03/. Finally, the data structure should look like:
cuhk03/
    cuhk03_release/
    cuhk03_new_protocol_config_detected.mat
    cuhk03_new_protocol_config_labeled.mat
    ...
  4. Use -d cuhk03 when running the training code. By default, the new split (767/700) is used. If you want to use the original split (1367/100) created by [13], specify --cuhk03-classic-split. As [13] computes CMC differently from Market1501, you might need to specify --use-metric-cuhk03 for a fair comparison with their method. In addition, we support both labeled and detected modes; the default mode loads detected images. Specify --cuhk03-labeled if you want to train and test on labeled images.

DukeMTMC-reID [16, 17]:

  1. Create a directory under data/ called dukemtmc-reid.
  2. Download the dataset DukeMTMC-reID.zip from https://github.com/layumi/DukeMTMC-reID_evaluation#download-dataset and put it in data/dukemtmc-reid/. Extract the zip file, which leads to:
dukemtmc-reid/
    DukeMTMC-reID.zip # (you can delete this zip file; it is no longer needed)
    DukeMTMC-reID/ # this folder contains 8 files
  3. Use -d dukemtmcreid when running the training code.

MSMT17 [22]:

  1. Create a directory named msmt17/ under data/.
  2. Download the dataset MSMT17_V1.tar.gz to data/msmt17/ from http://www.pkuvmc.com/publications/msmt17.html. Extract the file under the same folder, so you will have:
msmt17/
    MSMT17_V1.tar.gz # (do whatever you want with this .tar file)
    MSMT17_V1/
        train/
        test/
        list_train.txt
        ... (totally six .txt files)
  3. Use -d msmt17 when running the training code.

Prepare pretrained ReID models

  1. Create a directory under this repo to store the pretrained ReID models:
mkdir models/
  2. Download the pretrained models, or train the models from scratch yourself offline.

    2.1 Download Links

    • IDE
    • DenseNet121
    • AlignedReID
    • PCB
    • Mudeep
    • HACNN
    • CamStyle
    • LSRO
    • HHL
    • SPGAN

    2.2 Training models from scratch (optional)

    Create a directory named after the target model (e.g., aligned/ or hacnn/), following __init__.py under models/, and move the checkpoint of the pretrained model into this directory. Details of the naming rules can be found at the download links.

  3. Customized ReID models (optional)

    It is easy to test the robustness of any customized ReID model by following the steps above (1→2.2→3). The extra thing you need to do is add the structure of your own model to models/ and register it in __init__.py, as in the sketch below.
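
    Since the exact contents of models/__init__.py are specific to this repo, the following is only a hedged sketch of the common factory-dictionary pattern it refers to; MyReIDNet, the 'mynet' key, and the init_model helper are hypothetical names you would replace with the ones actually used in __init__.py.

# Hypothetical sketch of registering a customized model in models/__init__.py;
# the names below are illustrative, not the repo's exact code.
import torch.nn as nn

class MyReIDNet(nn.Module):            # stand-in for your customized model
    def __init__(self, num_classes=751):
        super(MyReIDNet, self).__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1),
                                      nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        feat = self.backbone(x).view(x.size(0), -1)
        return self.classifier(feat)

__factory = {
    'mynet': MyReIDNet,                # register your model under a new key
}

def init_model(name, *args, **kwargs):
    if name not in __factory:
        raise KeyError("Unknown model: {}".format(name))
    return __factory[name](*args, **kwargs)

# Usage: model = init_model('mynet', num_classes=751)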

Train

Take attacking AlignedReID trained on Market1501 as an example:

python train.py \
  --targetmodel='aligned' \
  --dataset='market1501' \
  --mode='train' \
  --loss='xent_htri' \
  --ak_type=-1 \
  --temperature=-1 \
  --use_SSIM=2 \
  --epoch=40

Test

Take attacking AlignedReID trained on Market1501 as an example:

python train.py \
  --targetmodel='aligned' \
  --dataset='market1501' \
  --G_resume_dir='./logs/aligned/market1501/best_G.pth.tar' \
  --mode='test' \
  --loss='xent_htri' \
  --ak_type=-1 \
  --temperature=-1 \
  --use_SSIM=2 \
  --epoch=40

Results

Reminders

  1. If you are using your own trained ReID models (whether or not they are customized), be careful about the variable names and adjust or keep Lines 38–53 in __init__.py as appropriate (they adapt checkpoints trained with early PyTorch 0.3). A generic key-remapping sketch is given after this list.
  2. You may notice that some arguments and code involve attribute information. If you are interested in that, you can easily find and download the extra attribute files for Market1501 or DukeMTMC. We conducted some related experiments on attribute attacks, but since they are not the main content of this paper, that part of the code has been removed.
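
On the first reminder: the usual source of mismatch with checkpoints saved by earlier PyTorch versions or with nn.DataParallel is the state_dict key names (for example, a leading "module." prefix). The snippet below is a generic, hedged sketch of such a remapping, not the exact logic of Lines 38–53 in __init__.py, and the checkpoint path is a placeholder.

import torch

# Hedged sketch: strip a leading "module." prefix (left by nn.DataParallel)
# so the checkpoint keys match a plain model's state_dict. The path below
# is a placeholder, not a file shipped with this repo.
checkpoint = torch.load("models/aligned/checkpoint.pth.tar", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)

remapped = {}
for key, value in state_dict.items():
    new_key = key[len("module."):] if key.startswith("module.") else key
    remapped[new_key] = value

# model.load_state_dict(remapped)  # then load into your ReID model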

Reference

If you are interested in our work, please consider citing our paper.

@InProceedings{Wang_2020_CVPR,
author = {Wang, Hongjun and Wang, Guangrun and Li, Ya and Zhang, Dongyu and Lin, Liang},
title = {Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
} 

Acknowledgements

Thanks to the following excellent works:
