
kakaoenterprise / learning-debiased-disentangled

94 stars, 8 forks, 1.64 MB

Official PyTorch implementation of "Learning Debiased Representation via Disentangled Feature Augmentation" (NeurIPS 2021, Oral)

Python 93.90% Shell 6.10%

learning-debiased-disentangled's People

Contributors

eungyeupkim, leebebeto, zealota


learning-debiased-disentangled's Issues

Results on BAR dataset?

It appears that the result on the BAR dataset was included in the paper during the submission phase (https://openreview.net/forum?id=-oUhJJILWHb) and was appreciated by the reviewers, yet it was removed from the final camera-ready version. Is there a particular reason for this? Could you provide the code for replicating the result on the BAR dataset?

typo in readme

In the dataset file descriptions for cmnist and cifar10c,
there's a typo: conlict -> conflict

How can I calculate the unbiased accuracy?

Hi,
Thanks for sharing the code, it's really helpful. From your code, I can get the accuracy on bias-aligned samples, the accuracy on bias-conflicting samples, and the total accuracy. But how can I calculate the unbiased accuracy? Thanks!
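
Not an official answer, but one convention seen in the debiasing literature (an assumption on my part, not something this repo documents) is to average accuracy over all (target label, bias label) groups, so that bias-aligned and bias-conflicting groups count equally regardless of their frequency:

```python
import numpy as np

def unbiased_accuracy(preds, targets, bias_labels):
    """Hypothetical helper, not part of this repo: mean per-group
    accuracy, where a group is a (target label, bias label) pair."""
    preds, targets, bias_labels = map(np.asarray, (preds, targets, bias_labels))
    group_accs = []
    for t in np.unique(targets):
        for b in np.unique(bias_labels):
            mask = (targets == t) & (bias_labels == b)
            if mask.any():  # skip empty groups
                group_accs.append((preds[mask] == targets[mask]).mean())
    return float(np.mean(group_accs))
```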

Intuition on GCE

Hello dear authors,

I was wondering if I could have some clarifications on GCE.

  1. GCE, L_q(f(x), y) = (1 - f_y(x)^q) / q, is almost an interpolation between CCE and MAE. Their respective derivatives with respect to f_y(x) are dCCE/df_y = -1/f_y(x), dGCE/df_y = -f_y(x)^(q-1), and dMAE/df_y = -1, so GCE recovers CCE as q -> 0 and MAE at q = 1 (see the sketch after this list).
  2. While CCE dynamically attributes a different "weight" (1/f(x)) according to the difficulty of the sample, MAE is uniform across all samples.
  3. Usually, "difficult" samples are bias-conflicting samples. Thus, the CCE gradient is higher in this scenario.
  4. On the other hand, GCE attributes less gradient in the same scenario. Thus, GCE is less attentive to "difficult" samples, which in turn allocates more attention to "easier" samples in expectation. Therefore, GCE is less focused on intrinsic attributes and seeks out shortcuts.
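
For concreteness, a minimal PyTorch sketch of this gradient comparison (my own toy example, not code from this repo; the loss follows Zhang & Sabuncu, 2018):

```python
import torch
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    """Generalized cross entropy: L_q = (1 - f_y(x)^q) / q."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # f_y(x)
    return ((1.0 - p_y.pow(q)) / q).mean()

# One confident ("easy") and one uncertain ("hard") sample.
logits = torch.tensor([[4.0, 0.0], [0.1, 0.0]], requires_grad=True)
targets = torch.tensor([0, 0])
gce_loss(logits, targets).backward()
print(logits.grad)
# Relative to CCE, each sample's gradient is scaled by f_y(x)^q, so the
# hard (typically bias-conflicting) sample is damped rather than the
# easy one being up-weighted.
```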

The reason I doubt my statement is your clarification on GCE in OpenReview:
[screenshot of the authors' OpenReview reply on GCE]
I thought GCE was more attentive to the bias element not because it emphasizes the easy samples but because it de-emphasizes the hard ones.

Thank you for your attention and great paper!

Non-decreasing Training Loss

Hi,

Thanks for open-sourcing the code! I made a local copy of the code base as well as the CMNIST dataset and simply ran the MLP and MLP_disentangled models with the command lines provided in the README. The training loss in either case is not decreasing, and hence the test accuracy is similar to that of a naive classifier (~10%). Could you look into this issue?

Thank you!

Validation set for CMNIST

Dear authors,
I want to know the data setting of the validation set for CMNIST. Does the validation set have the same bias degree as the training set? Thanks in advance.

A question about the experiment result.

When I run the command

python train.py --dataset cmnist --exp=cmnist_0.5_ours --lr=0.01 --percent=0.5pct --curr_step=10000 --lambda_swap=1 --lambda_dis_align=10 --lambda_swap_align=10 --use_lr_decay --lr_decay_step=10000 --lr_gamma=0.5 --train_ours --tensorboard

the output is

...
valid_d: 0.11319999396800995 || test_d: 0.11349999904632568
 97%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████▌   | 48496/50000 [12:32<00:16, 88.77it/s]
48500 model saved ...
valid_d: 0.11319999396800995 || test_d: 0.11349999904632568
 98%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████▋  | 48996/50000 [12:40<00:10, 96.10it/s]
49000 model saved ...
valid_d: 0.11319999396800995 || test_d: 0.11349999904632568
 99%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 49500/50000 [12:49<00:07, 71.10it/s]
49500 model saved ...
valid_d: 0.11319999396800995 || test_d: 0.11349999904632568

The test accuracy is 0.1135. However, the result in the paper is 65.22. I don't know what's wrong. Can anyone help me?

Code for data generation

Hi,
Thanks for open-sourcing the code! Could you please make the data generation code public? I hope to modify the dataset further to test the relationship between bias ratio and performance. What your generation code does seems clear from the released dataset, but I would still love to refer to it.
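
Not the authors' pipeline (the helper below is a hypothetical sketch), but a common recipe for a colored-MNIST-style dataset with a controllable bias ratio is to assign each class a color and corrupt a small fraction of samples with a mismatched color; e.g. conflict_ratio=0.005 would correspond to a 0.5pct split:

```python
import numpy as np

PALETTE = np.random.RandomState(0).randint(0, 256, size=(10, 3))  # one RGB color per class

def colorize(images, labels, conflict_ratio=0.005, seed=0):
    """Hypothetical generator: images (N, 28, 28) in [0, 1], labels (N,) ints 0-9."""
    rng = np.random.RandomState(seed)
    colored = np.zeros((len(images), 28, 28, 3), dtype=np.float32)
    bias_labels = np.array(labels).copy()
    for i, y in enumerate(labels):
        if rng.rand() < conflict_ratio:  # bias-conflicting sample: wrong color
            bias_labels[i] = rng.choice([c for c in range(10) if c != y])
        colored[i] = images[i][..., None] * (PALETTE[bias_labels[i]] / 255.0)
    return colored, bias_labels  # bias label records the color actually applied
```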

Official method name

Dear authors,
What's the official name for your method? I cannot find the name in your paper, and I want to compare against your method in my paper.
Thanks in advance.

Shaohua
