yisenwang / mart
Code for ICLR 2020 "Improving Adversarial Robustness Requires Revisiting Misclassified Examples"
Hi Yisen,
Thank you for sharing!
The paper https://arxiv.org/pdf/2002.11242.pdf evaluates MART, and the reported robust accuracy under C&W drops by ~6% compared to the accuracy under the PGD attack, whereas in your paper the reported accuracies under those two attacks are about the same. I ran your code, and there is a gap between the accuracy under PGD and under other attacks when evaluated with https://github.com/fra31/auto-attack.
Besides, when I replace CE with KL in the inner maximization, the accuracy under PGD drops by ~3% and the gap between PGD and the other attacks narrows to ~3%. However, the accuracy reported in your paper does not vary much when CE is replaced by KL in the inner maximization. Do you have any thoughts on what may be going wrong?
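For readers unfamiliar with the inner maximization being discussed: the sketch below shows one common form of an L_inf PGD attack with a CE objective. It uses a toy linear classifier so the input gradient is analytic; the function names and parameter defaults (eps = 8/255, alpha = 2/255, 10 steps) are my own illustrative choices, not the repo's actual code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_ce_linear(x, y, W, eps=8/255, alpha=2/255, steps=10, seed=0):
    """PGD with a CE objective on a toy linear classifier, logits = W @ x.

    For this model dCE/dx = W^T (softmax(W x) - onehot(y)), so no autograd
    is needed; in the actual repo the gradient comes from backprop instead.
    """
    rng = np.random.default_rng(seed)
    x_adv = np.clip(x + rng.uniform(-eps, eps, x.shape), 0.0, 1.0)
    onehot = np.zeros(W.shape[0])
    onehot[y] = 1.0
    for _ in range(steps):
        grad = W.T @ (softmax(W @ x_adv) - onehot)
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on CE
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image
    return x_adv
```

The question above is about swapping the CE term in the ascent step for a KL divergence between natural and adversarial predictions (as TRADES does); the projection steps stay the same.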
Hi, thanks for your promising work.
We ran your code with \beta = 0 and got the following result:
best robust acc: 0.5485000014305115, natural acc: 0.8273999691009521 @ epoch 80
which is much better than the result reported in Figure 2(a) of the paper [the best there is around 51%].
Does \beta = 0 correspond to the BCE(x_adv) curve in Figure 2(a)?
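To make the \beta = 0 question concrete, here is a minimal per-example sketch of the MART objective as described in the paper: a boosted CE (BCE) term on the adversarial example plus a KL term weighted by \beta and by how wrongly the natural example is classified. The function name and the single-example/NumPy form are my own simplification, not the repo's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mart_loss(logits_nat, logits_adv, y, beta):
    """Sketch of the MART loss for one example:
    BCE(adv) + beta * KL(nat || adv) * (1 - p_y(nat)).

    With beta = 0 only the BCE(x_adv) term survives, which is why
    beta = 0 should match the BCE(x_adv) ablation curve.
    """
    p_nat, p_adv = softmax(logits_nat), softmax(logits_adv)
    # boosted CE: standard CE plus a margin term on the largest wrong class
    other = np.delete(p_adv, y)
    bce = -np.log(p_adv[y]) - np.log(1.0 - other.max())
    kl = np.sum(p_nat * np.log(p_nat / p_adv))
    return bce + beta * kl * (1.0 - p_nat[y])
```

With beta = 0 the natural logits drop out of the loss entirely, which is the ablation being compared against Figure 2(a).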
The link https://mega.nz/#!xZ8glS6J!MAnE91ND_WyfZ_8mvkuSa2YcA7q-1ehfSm-Q1fxOvvs cannot be accessed from China.
Hi Yisen Wang,
When I test accuracy under a white-box attack, I find that the test-set accuracy differs depending on whether the MART loss or the cross-entropy loss is used to generate the attack. Which loss did you use when reporting white-box accuracy in the paper?
Hi, thanks for your work.
Could you tell me the attack parameters for FGSM and CW used in the experiments in this paper?
Great work!
BTW, I am new to this field and curious why you do not directly constrain the CE loss on natural samples. It seems that a non-classification constraint is used during training instead. Could you provide some insight into this choice?
Thank you for sharing!
I would like to ask about a derivation in the paper. Specifically, I don't follow the last two steps of formula (7).
It seems that 1(h_theta(x_i) = y_i) = 1 and 1(h_theta(x_adv_i) != y_i) = 0 are assumed, so that the penultimate line implies the last line. But why do those two identities hold?
Hi,
Thanks for making the code public; it is really concise and readable. I'm wondering how many epochs you train the models for, since I couldn't find this in the paper. Based on the learning-rate schedule (decay at epochs 75 and 90), I guess you follow TRADES and therefore train for 100 epochs, right? And do the ResNets and WideResNets share the same number of training epochs?
Thanks
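For reference, the schedule mentioned above (decay at epochs 75 and 90) can be sketched as a simple step function. The base learning rate of 0.1 and the 10x decay factor are assumptions on my part (they are the common TRADES-style defaults), not values confirmed in this thread.

```python
def lr_at_epoch(epoch, base_lr=0.1):
    """Step schedule discussed in the thread: 10x decay at epochs 75 and 90.

    base_lr = 0.1 and the decay factor are assumed defaults, not confirmed.
    """
    if epoch >= 90:
        return base_lr * 0.01
    if epoch >= 75:
        return base_lr * 0.1
    return base_lr
```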
Your work performs well on both the CIFAR-10 and MNIST datasets. I wonder about its performance on CIFAR-100. Have you applied MART to CIFAR-100?