
dsn's People

Contributors

fungtion


dsn's Issues

Question about all of the loss

Thank you for the PyTorch code. I want to ask some questions. First, I think the "similarity" loss should be implemented with DANN or MMD, but you use CrossEntropyLoss. Second, the paper's loss is loss = loss_classification + loss_diff + loss_similarity + loss_recon, but the code computes loss = loss_classification + loss_diff + loss_recon1 + loss_recon2. I don't know if I got it wrong; please point out my mistake. Thanks!
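For context, the DSN objective decomposes into a task loss plus weighted reconstruction, difference, and similarity terms; in the DANN variant the similarity term is a domain-classification cross-entropy applied through a gradient-reversal layer, which is presumably why a CrossEntropyLoss appears in the code. A minimal sketch of the composition; the function name and weight values below are illustrative placeholders, not the repository's exact names:

```python
# Hedged sketch of the DSN objective from the paper:
#   L = L_task + alpha * L_recon + beta * L_diff + gamma * L_similarity
# The default weights here are placeholders for illustration only.
def dsn_total_loss(loss_task, loss_recon, loss_diff, loss_similarity,
                   alpha=0.01, beta=0.075, gamma=0.25):
    # In the DANN variant, loss_similarity is the domain classifier's
    # cross-entropy, made adversarial by a gradient-reversal layer.
    return (loss_task
            + alpha * loss_recon
            + beta * loss_diff
            + gamma * loss_similarity)
```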

multi-GPU

Did you try to make it work on multiple GPUs?
I tried to modify your code for multi-GPU training, but it always gives me this error:

RuntimeError: arguments are located on different GPUs

Did you meet this issue? and how did you deal with that?
Thank you!
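A common cause of "arguments are located on different GPUs" under nn.DataParallel is a tensor created on a fixed device inside forward() (e.g. via .cuda(0) or a plain constructor) while the inputs have been scattered across replicas. A general sketch of the pattern, not specific to this repository's model:

```python
import torch
import torch.nn as nn

# Minimal multi-GPU sketch with nn.DataParallel. Helper tensors created in
# forward() should derive their device from the input, not hard-code cuda:0,
# or each replica except one will mix devices.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        # Correct: create the helper tensor on the same device as x.
        offset = torch.zeros(2, device=x.device)
        return self.fc(x) + offset

model = Net()
if torch.cuda.device_count() > 1:
    # Each replica then builds its helper tensors on its own GPU.
    model = nn.DataParallel(model)
```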

The private code is always zeros.

Hi,
I was able to run the code and get a good result, but I noticed that the diff_loss is always zero even though the implementation of the DiffLoss function seems correct. When I printed the private code, I found it is always zeros.

It seems that the two most influential losses are the task loss and the DANN loss, which means that you would get comparable results without having private and shared encoders.

The implementation of 'p' is not similar to the original DANN paper

DSN/train.py

Line 171 in b2f6955

p = float(i + (epoch - dann_epoch) * len_dataloader / (n_epoch - dann_epoch) / len_dataloader)

it should be:
p = float(i + (epoch - dann_epoch) * len_dataloader) / (n_epoch - dann_epoch) / len_dataloader

A sanity check: p should vary from 0 to 1 during training. But with the current implementation, p reaches much higher values (i plus some fraction), where i is the number of batches processed so far in an epoch.

For reference, check line 94 in https://github.com/fungtion/DANN/blob/master/train/main.py
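The suggested correction can be checked numerically. Variable names below follow train.py, but the concrete values are illustrative assumptions:

```python
# Corrected annealing schedule for p, which should sweep 0 -> 1 over the
# DANN training phase. Values are illustrative, not the repository's config.
n_epoch = 100         # total epochs
dann_epoch = 10       # epoch at which DANN training starts
len_dataloader = 500  # batches per epoch

def p_schedule(epoch, i):
    """Fraction of the DANN phase completed at batch i of `epoch`."""
    return float(i + (epoch - dann_epoch) * len_dataloader) / (
        (n_epoch - dann_epoch) * len_dataloader)

# Sanity check: p stays within [0, 1] for the whole DANN phase.
assert p_schedule(dann_epoch, 0) == 0.0
assert p_schedule(n_epoch - 1, len_dataloader - 1) < 1.0
```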

loss_diff decrease to zero very fast

Thanks for your nice codes!

During my training, the difference loss decreases to 0 within the first 100 steps and remains 0 afterwards.

Did you also encounter this?

Thanks.

Question about DiffLoss

Hi, thanks for the nice PyTorch implementation.

I have some questions for the DiffLoss:

  • Shouldn't this line compute the correlation of each feature dimension of the private features with each feature dimension of the shared features? As it is now, it computes the correlation of one sample with another. Correct me if I'm wrong, but shouldn't it be:
    torch.mean((input1_l2.t().mm(input2_l2).pow(2))) instead?

  • Also, you mention that there are some stability issues. Could that be because there is no mean value normalization, as done in the TF implementation of the authors?

Thanks a lot :)
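Combining both points, a hedged re-implementation along the lines of the authors' TensorFlow version (zero-mean each feature, L2-normalize each sample, then penalize squared correlations between the private and shared subspaces). This is a sketch under those assumptions, not the repository's exact code:

```python
import torch
import torch.nn.functional as F

# Sketch of DiffLoss with mean normalization, following the description of
# the authors' TF implementation: center features over the batch, L2-normalize
# each row (sample), then penalize the (dim x dim) cross-correlation matrix.
class DiffLoss(torch.nn.Module):
    def forward(self, private, shared):
        # private, shared: (batch, dim)
        private = private - private.mean(dim=0, keepdim=True)
        shared = shared - shared.mean(dim=0, keepdim=True)
        private = F.normalize(private, p=2, dim=1)
        shared = F.normalize(shared, p=2, dim=1)
        # Correlation between feature dimensions, not between samples.
        correlation = private.t().mm(shared)
        return correlation.pow(2).mean()
```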

Diff loss and dann loss are zeros for all epochs.

Hi,
The issue I'm facing is that the diff loss and the DANN loss are zero from the first epoch and stay constant for the whole training run. The only thing I changed in the code is adding another image transformation for mnist_m, since mnist and mnist_m images have different sizes.

Any help is appreciated!
Thank you!

raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly

Hi, thanks for your code, it is good work, but I ran into some problems when I started training. I hope you can give me some advice. Thanks!
My specific problems are as follows:
My training environment is Python 3.7.9 with PyTorch 1.7.1 and CUDA 11.0. I learned that your training environment was Python 2 with PyTorch 0.4.0, so I ported some code (e.g. the print formatting) to Python 3, and to solve the shape mismatch between mnist and mnist_m I revised the transform code to align them.

After those small revisions the code runs, but after one epoch it raises the following error:

File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 885, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 17440, 2128, 16900, 13076, 17672, 15916) exited unexpectedly


To work around this, I set num_workers to 0 or 1 and reduced the batch size from 64 to 32, 16, 8, even 1, but it still raises this error after the first epoch. I don't know how to solve it right now; could you give me some help? Thanks!

Question about the Sign of SIMSE Loss

Hi, I was reading the DSN paper and saw you codes here. Nice codes!

But I was wondering whether the loss terms in train.py (lines 189 & 245) should be loss -= source_simse and loss -= target_simse. To confirm this, I read the original scale-invariant mean squared error paper, in particular eqn. 4, and reached this conclusion.

I think that with this change the accuracy could be higher and get closer to DSN's reported result.
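For reference, eqn. 4 of the scale-invariant MSE paper subtracts the squared-mean term, which is what motivates the sign question. A hedged standalone sketch of that formula (the function name is illustrative, not the repository's):

```python
import torch

# Scale-invariant MSE per eqn. 4 of Eigen et al.:
#   L = (1/n) * sum(d_i^2) - (1/n^2) * (sum(d_i))^2,  with d = pred - target.
# Note the second term is *subtracted*; the issue argues train.py should
# combine its MSE and SIMSE terms with the same sign convention.
def scale_invariant_mse(pred, target):
    d = pred - target
    n = d.numel()
    return d.pow(2).sum() / n - d.sum().pow(2) / n ** 2
```

A quick property check: a prediction that is off by only a constant offset incurs zero loss, which is the point of the formulation.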

Question about the accuracy of the result

Hi, really, thank you for your code; it is just what I want. I want to ask about the accuracy of the result: I only get about 73%, even though I didn't change any parameters and everything is at the defaults.
Can you tell me what the problem might be? Many thanks!

Question about ReverseLayerF

Hi, your code is very cool, but I don't understand the ReverseLayerF.apply(shared_code, p) used in model_compat.py. What role does it play?
Thanks!
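ReverseLayerF is the gradient-reversal layer from DANN (Ganin & Lempitsky): identity in the forward pass, but the backward pass flips and scales the gradient, so the shared encoder learns to *confuse* the domain classifier that sits above it. A minimal sketch in that spirit (alpha plays the role of p in the call above):

```python
import torch
from torch.autograd import Function

# Minimal gradient-reversal layer: forward is the identity; backward
# multiplies the incoming gradient by -alpha. Inserted between the shared
# encoder and the domain classifier, it turns the classifier's minimization
# into the encoder's maximization, i.e. adversarial domain confusion.
class ReverseLayerF(Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip and scale the gradient flowing back to the encoder;
        # alpha itself receives no gradient.
        return grad_output.neg() * ctx.alpha, None
```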
