
ylsung / pytorch-adversarial-training

Stars: 234 | Watchers: 5 | Forks: 66 | Size: 1.29 MB

PyTorch-1.0 implementation of adversarial training on MNIST/CIFAR-10 and visualization of the robust classifier.

Python 99.68% Shell 0.32%
adversarial-training visualization mnist-classification cifar10-classification

pytorch-adversarial-training's People

Contributors

ylsung


pytorch-adversarial-training's Issues

Question about the maximum/minimum value of the pixels

Hi, thanks for the good work!
I'm trying to use the attack for my own purposes.
It seems that no normalization, such as transforms.Normalize(), is used, so the input ranges from 0 to 1.
https://github.com/louis2889184/pytorch-adversarial-training/blob/1103fe300dc08f740b6870aebdd40a87d5690a45/cifar-10/main.py#L206-L210
As far as I know, it's common practice to normalize a tensor image with a given mean and standard deviation, in which case the input would have a larger range. If so, when
https://github.com/louis2889184/pytorch-adversarial-training/blob/1103fe300dc08f740b6870aebdd40a87d5690a45/cifar-10/src/attack/fast_gradient_sign_untargeted.py#L113
is applied, the perturbed images come out completely different. In my case, the attack actually destroyed the training process and the model diverged.
My question is: what should I do to solve this?
Also, are there any other changes I should make if the attack is to be used for general purposes?
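One common way to resolve this (a hypothetical sketch, not code from this repo) is to keep the attack, and its clamping, in [0, 1] pixel space and fold the normalization into the model itself, so `transforms.Normalize()` is no longer applied in the data pipeline. The wrapper class, backbone, and statistics below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class NormalizedModel(nn.Module):
    """Wrap a model so normalization happens inside forward().

    The attack then sees inputs in [0, 1], and clamping the
    perturbed image to [0, 1] remains valid.
    """

    def __init__(self, model, mean, std):
        super().__init__()
        self.model = model
        # Register as buffers so they follow .to(device) with the model.
        self.register_buffer("mean", torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer("std", torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        # x is expected in [0, 1]; normalize just before the backbone.
        return self.model((x - self.mean) / self.std)

# Usage sketch with commonly used CIFAR-10 statistics and a toy backbone.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model = NormalizedModel(backbone,
                        mean=[0.4914, 0.4822, 0.4465],
                        std=[0.2470, 0.2435, 0.2616])
x = torch.rand(4, 3, 32, 32)  # pixels in [0, 1]
logits = model(x)             # shape (4, 10)
```

With this wrapper, the epsilon ball and the clamp both live in pixel space, so the attack code needs no changes; the alternative of rescaling epsilon per channel by the standard deviation also works but is more error-prone.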

More information about the updated checkpoint of the PGD-trained Madry model on CIFAR-10

Hi, first, thanks for your great work!

I'd like to know more about the updated checkpoint of the PGD-trained Madry model on CIFAR-10. Was this checkpoint saved after all 76,000 iterations were done? I ran a PGD-20 attack against your trained model and got 50.05% accuracy, whereas the leaderboard of MadryLab's CIFAR-10 challenge reports 47.04%. Is there any possible reason for such a difference?

Thanks for your attention. Looking forward to your reply.

opt.step() in the train function will use the information from the generator

Hi! Thanks for sharing the code!

I have a question. In the train function, when you call opt.step(), the weights are updated using gradients from both the loss in the main function and the loss in the function where you generate the perturbation, right?
But I don't think the weights should be updated with gradients from the loss in the generator function.
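A minimal sketch of the concern and the usual fix (the variable names are illustrative, not this repo's): backpropagating the attack loss with respect to the input also populates the model parameters' `.grad` fields, so calling `opt.zero_grad()` before the real training backward pass discards those stale gradients.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(8, 10)
y = torch.randint(0, 2, (8,))

# Attack phase: backward() w.r.t. the input also fills parameter .grad.
delta = torch.zeros_like(x, requires_grad=True)
loss_fn(model(x + delta), y).backward()
x_adv = torch.clamp(x + 0.1 * delta.grad.sign(), 0, 1).detach()

# Training phase: clear the gradients accumulated during the attack
# so opt.step() only uses gradients from the adversarial-example loss.
opt.zero_grad()
loss_fn(model(x_adv), y).backward()
opt.step()
```

If `opt.zero_grad()` (or an equivalent `model.zero_grad()`) is not called between the two backward passes, the update silently mixes both losses because PyTorch accumulates gradients across backward calls.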

The result of the standard training model on CIFAR-10 is lower than reported in the paper

Hi, thanks for your great work!

I ran the code without adv_train and adv_test on CIFAR-10 and got an accuracy of about 87%. This matches the value reported on this repository's page, but not the 95.2% reported in Madry's paper [1].

So I am confused by these results. In particular, as reported, the l_inf-trained model here achieves 63.8% robust accuracy vs. 55.97% for Madry's model. Are there differences between this implementation and the paper?

Thanks for your attention, and looking forward to your reply!

[1] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.

Question about Test adv acc

Hi,

I ran the code, and the test adversarial accuracy is 0.18% after training. Does this mean that the model is still not robust after adversarial training?

Thanks,
Xiang
