
fact's Introduction

FACT

This repo provides a demo for the CVPR 2021 paper "A Fourier-based Framework for Domain Generalization" on the PACS dataset.

To cite, please use:

@InProceedings{Xu_2021_CVPR,
    author    = {Xu, Qinwei and Zhang, Ruipeng and Zhang, Ya and Wang, Yanfeng and Tian, Qi},
    title     = {A Fourier-Based Framework for Domain Generalization},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {14383-14392}
}
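
For context, the core operation behind the paper's Fourier-based framework is to mix the amplitude spectra of two source images while keeping the phase, which the paper treats as carrying the high-level semantics. The snippet below is only a simplified NumPy sketch of that idea, not the implementation used in this repo:

# Simplified sketch of amplitude mixing (illustration only, not the repo's code).
import numpy as np

def amplitude_mix(img_a, img_b, lam):
    # img_a, img_b: float arrays of identical shape, e.g. (H, W) or (H, W, C).
    fft_a = np.fft.fft2(img_a, axes=(0, 1))
    fft_b = np.fft.fft2(img_b, axes=(0, 1))
    amp_a, pha_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)
    # Linearly interpolate the amplitudes and keep img_a's phase.
    amp_mix = (1.0 - lam) * amp_a + lam * amp_b
    mixed = np.fft.ifft2(amp_mix * np.exp(1j * pha_a), axes=(0, 1)).real
    return np.clip(mixed, 0.0, 255.0)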

Requirements

  • Python 3.6
  • PyTorch 1.1.0

Evaluation

First, create a ckpt/ directory and place your model checkpoint inside it. To run the evaluation code, download the PACS dataset from http://www.eecs.qmul.ac.uk/~dl307/project_iccv2017. Then update the files with the suffix _test.txt in data/datalists for each domain, following the style below:

/home/user/data/images/PACS/kfold/art_painting/dog/pic_001.jpg 0
/home/user/data/images/PACS/kfold/art_painting/dog/pic_002.jpg 0
/home/user/data/images/PACS/kfold/art_painting/dog/pic_003.jpg 0
...
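
If the shipped datalists contain absolute paths from another machine, a small helper like the one below can rewrite the path prefix. This is a hypothetical script, not part of the repo, and both prefixes are placeholders you must adapt:

# Hypothetical helper (not part of the repo): rewrite the image-path prefix in an
# existing datalist so that it points at your local copy of PACS.
OLD_PREFIX = '/old/path/to/PACS/kfold'            # prefix found in the shipped file
NEW_PREFIX = '/home/user/data/images/PACS/kfold'  # your local path

path = 'data/datalists/art_painting_test.txt'
with open(path) as f:
    lines = f.read().splitlines()

with open(path, 'w') as f:
    for line in lines:
        f.write(line.replace(OLD_PREFIX, NEW_PREFIX) + '\n')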

Once the data is prepared, update the paths to the test files and the output logs in shell_test.py:

input_dir = 'path/to/test/files'
output_dir = 'path/to/output/logs'

Then simply run:

python shell_test.py -d=art_painting

You can use the argument -d to specify the held-out target domain (one of art_painting, cartoon, photo, sketch).

Training from scratch

After downloading the dataset, update the files with the suffixes _train.txt and _val.txt in data/datalists for each domain, following the same style as the test files above. Please make sure you are using the official train-val split. Then update the paths to the train and validation files and the output logs in shell_train.py:

input_dir = 'path/to/train/files'
output_dir = 'path/to/output/logs'

Then run:

python shell_train.py -d=art_painting

Use the argument -d to specify the held-out target domain.
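
To run the full leave-one-domain-out protocol, launch one training run per held-out domain. The loop below is only a hypothetical convenience wrapper around the provided entry script, not part of the repo:

# Hypothetical convenience loop (not part of the repo): train once for each
# held-out PACS domain.
import subprocess

for domain in ['art_painting', 'cartoon', 'photo', 'sketch']:
    subprocess.run(['python', 'shell_train.py', f'-d={domain}'], check=True)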

Acknowledgement

Part of our code is borrowed from the following repositories.

  • JigenDG: "Domain Generalization by Solving Jigsaw Puzzles", CVPR 2019
  • DDAIG: "Deep Domain-Adversarial Image Generation for Domain Generalisation", AAAI 2020

We thank the authors for releasing their code. Please also consider citing their work.

fact's Issues

About the division of OfficeHome dataset

Thanks for your work. There is no official split of the OfficeHome dataset. How do you divide OfficeHome into a training set and a validation set? Is the result in your paper the test-set result corresponding to the highest validation accuracy? Thanks.

On the model selection

In your paper, you write: "we conduct the leave-one-domain-out evaluation. We train our model on the training splits and select the best model on the validation splits of all source domains".

However, in my opinion, and according to the description in https://github.com/facebookresearch/DomainBed, "select the best model on the validation splits of all source domains" is "training-domain validation" (the "IIDAccuracySelectionMethod"), rather than "leave-one-domain-out" evaluation.

What is your opinion? Thanks for your reply.

Code for generating phase-only reconstruction

In the paper, you mention that "we first generate phase-only reconstructed images by setting the amplitude component as a constant."
However, I cannot understand how you set the amplitude component to a constant.
Whenever I try it that way, I get a black image.
Could you provide your implementation for generating phase-only reconstructed images?
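
For reference, one common way to obtain a usable phase-only reconstruction (a sketch for a single-channel image, not the authors' code) is to keep the phase, replace the amplitude with a constant, and rescale the result before saving it; the raw output has a very small dynamic range, which is why it can appear black:

# Minimal sketch (not the authors' code) of phase-only reconstruction.
import numpy as np
from PIL import Image

img = np.asarray(Image.open('pic_001.jpg').convert('L'), dtype=np.float32)

phase = np.angle(np.fft.fft2(img))

# Constant (unit) amplitude combined with the original phase.
recon = np.fft.ifft2(np.exp(1j * phase)).real

# Rescale to [0, 255] before visualization; without this step the image looks black.
recon = (recon - recon.min()) / (recon.max() - recon.min() + 1e-8) * 255.0
Image.fromarray(recon.astype(np.uint8)).save('phase_only.png')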
