A curated list of awesome adversarial machine learning resources, inspired by awesome-computer-vision.
- Breaking Linear Classifiers on ImageNet, A. Karpathy, blog 2015
- Breaking things is easy, N. Papernot & I. Goodfellow, cleverhans blog 2016
- Intriguing properties of neural networks, C. Szegedy et al., ICLR 2014
- Explaining and Harnessing Adversarial Examples, I. Goodfellow et al., ICLR 2015
- Adversarial Examples in the Physical World, A. Kurakin et al., arxiv 2016
- Adversarial Examples for Generative Models, J. Kos et al., arxiv 2017
- Distributional Smoothing with Virtual Adversarial Training, T. Miyato et al., ICLR 2016
- Adversarial Training Methods for Semi-Supervised Text Classification, T. Miyato et al., ICLR 2017
- The Limitations of Deep Learning in Adversarial Settings, N. Papernot et al., IEEE EuroS&P 2016
- Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, N. Papernot et al., arxiv 2016
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, N. Papernot et al., arxiv 2016
- Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, A. Nguyen et al., CVPR 2015
- DeepFool: a simple and accurate method to fool deep neural networks, S.-M. Moosavi-Dezfooli et al., CVPR 2016
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, N. Papernot et al., IEEE S&P 2016
- Towards Evaluating the Robustness of Neural Networks, N. Carlini & D. Wagner, IEEE S&P 2017
- Delving into Transferable Adversarial Examples and Black-box Attacks, Y. Liu et al., ICLR 2017
- SoK: Towards the Science of Security and Privacy in Machine Learning, N. Papernot et al., arxiv 2016
- Learning Adversary-Resistant Deep Neural Networks
- Do Statistical Models Understand the World?, I. Goodfellow, 2015
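The fast gradient sign method (FGSM) introduced in Explaining and Harnessing Adversarial Examples above can be sketched in a few lines. This is a minimal illustration on a hand-rolled logistic-regression classifier with made-up toy weights and inputs, not code from any of the listed papers:

```python
import numpy as np

def fgsm_attack(x, y, w, b, epsilon):
    """One-step FGSM for a binary logistic-regression classifier
    p(y=1|x) = sigmoid(w.x + b): perturb x by epsilon along the sign
    of the loss gradient to push the prediction toward the wrong class."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model's predicted probability
    grad_x = (p - y) * w                           # dL/dx for the logistic loss
    return x + epsilon * np.sign(grad_x)

# Toy classifier that confidently labels x as class 1 (w.x + b = 3).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, -1.0])
x_adv = fgsm_attack(x, y=1.0, w=w, b=b, epsilon=2.0)
print(np.dot(w, x_adv) + b)  # the perturbed input now scores on the other side of the boundary
```

With a large epsilon the single signed step flips the decision; the papers above study how small such perturbations can be while still fooling deep networks.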
## License
To the extent possible under law, Yen-Chen Lin has waived all copyright and related or neighboring rights to this work.