Inspired by this repo and ML Writing Month. Questions and discussions are most welcome!
- Adversarial Examples: Attacks and Defenses for Deep Learning, TNNLS 2019
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, IEEE ACCESS 2018
- Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
- A Study of Black Box Adversarial Attacks in Computer Vision
- Adversarial Examples in Modern Machine Learning: A Review
- ATTACK: Evasion Attacks against Machine Learning at Test Time, ECML-PKDD 2013
- L-BFGS: Intriguing properties of neural networks, ICLR 2014
- FGSM: Explaining and Harnessing Adversarial Examples, ICLR 2015
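FGSM ("Explaining and Harnessing Adversarial Examples") perturbs an input one step in the direction of the sign of the loss gradient: x_adv = x + ε·sign(∇ₓL(x, y)). A minimal sketch, using a toy logistic-regression model (my own example, not from the paper) so the gradient has a closed form:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM for logistic regression with cross-entropy loss.

    For L = -y log p - (1-y) log(1-p) with p = sigmoid(w.x + b),
    dL/dx = (p - y) * w, so the attack only needs the sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy usage: a point correctly classified as class 1 is pushed across the boundary.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                 # w.x + b = 1.5 > 0 -> predicted class 1
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.9)
print(sigmoid(w @ x + b) > 0.5)          # True: clean point is class 1
print(sigmoid(w @ x_adv + b) > 0.5)      # False: flipped by the attack
```

The same one-liner applies to deep networks, with the gradient obtained by backpropagation instead of the closed form above.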
- ATTACK: The limitations of deep learning in adversarial settings, EuroS&P 2016
- ATTACK: DeepFool, CVPR 2016
- CW Attack: Towards evaluating the robustness of neural networks (C&W), IEEE S&P 2017
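The C&W attack optimizes a margin-based surrogate loss over the logits, f(x') = max(max_{i≠t} Z(x')ᵢ − Z(x')ₜ, −κ), which is ≤ 0 exactly when the target class t wins by at least the confidence margin κ. A small sketch of that loss term alone (the full attack minimizes it jointly with a distance penalty):

```python
import numpy as np

def cw_margin_loss(logits, target, kappa=0.0):
    """C&W-style margin loss over raw logits.

    Returns max(best_other - target_logit, -kappa): positive while the attack
    has not yet succeeded, clamped at -kappa once the target wins by kappa.
    """
    logits = np.asarray(logits, dtype=float)
    other = np.delete(logits, target)      # logits of all non-target classes
    return max(other.max() - logits[target], -kappa)

print(cw_margin_loss([1.0, 3.0, 0.5], target=1, kappa=5.0))  # -2.0: target wins by 2
print(cw_margin_loss([4.0, 3.0, 0.5], target=1))             # 1.0: best other leads by 1
```

The clamp at −κ is what lets the attacker trade off perturbation size against how confidently the target class must win.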
- Transferability: Transferability in machine learning: from phenomena to black-box attacks using adversarial samples, arXiv 2016
- Transferability, Feature Space: Feature Space Perturbations Yield More Transferable Adversarial Examples, CVPR 2019
- Transferability: Delving into Transferable Adversarial Examples and Black-box Attacks, ICLR 2017
- Adversarial Training: The Limitations of Adversarial Training and the Blind-Spot Attack, ICLR 2019
- Universal: Universal Adversarial Perturbations, CVPR 2017
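A universal perturbation is a single delta, constrained to a norm ball, that fools the classifier on a large fraction of inputs at once. The paper builds it by accumulating per-sample DeepFool steps; as a loose stand-in (my own toy setup, not the paper's algorithm), the sketch below accumulates signed-gradient steps on a logistic model, projecting back onto the L∞ ball after each update:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def universal_perturbation(X, Y, w, b, eps, passes=10, alpha=0.05):
    """One shared delta for all inputs, kept inside the L-inf ball of radius eps."""
    delta = np.zeros(X.shape[1])
    for _ in range(passes):
        for x, y in zip(X, Y):
            p = sigmoid(w @ (x + delta) + b)
            if (p > 0.5) == bool(y):              # sample still correctly classified
                grad = (p - y) * w                # per-sample loss gradient wrt input
                delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

# Toy usage on a linear model and random 2-D points.
rng = np.random.default_rng(1)
w, b = np.array([2.0, -1.0]), 0.0
X = rng.normal(size=(20, 2))
Y = (X @ w + b > 0).astype(float)
delta = universal_perturbation(X, Y, w, b, eps=0.5)
print(np.all(np.abs(delta) <= 0.5))   # True: the perturbation respects the norm bound
```

The key property is that delta is image-agnostic: the same vector is added to every input, unlike the per-input perturbations of FGSM or C&W.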
- GAN, Natural: Generating Natural Adversarial Examples, ICLR 2018
- Theory: Are adversarial examples inevitable? 💭, ICLR 2019
- One-Pixel: One pixel attack for fooling deep neural networks, IEEE TEC 2019
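The one-pixel attack is black-box: it only queries the model's output scores and searches (via differential evolution in the paper) for a single pixel whose change flips the prediction. On a toy classifier the search space is small enough to enumerate outright, which the hypothetical sketch below does in place of differential evolution:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))            # toy 2-class linear model on 4x4 images

def predict(img_flat):
    return W @ img_flat                 # class scores (the only model access used)

def one_pixel_attack(img, true_class, values=np.linspace(0, 1, 11)):
    """Return a copy of `img` with one pixel changed that flips the label,
    or None if no single-pixel change over `values` succeeds."""
    flat = img.ravel()
    for i in range(flat.size):          # every pixel position...
        for v in values:                # ...times a grid of replacement values
            cand = flat.copy()
            cand[i] = v
            if predict(cand).argmax() != true_class:
                return cand.reshape(img.shape)
    return None

img = rng.uniform(size=(4, 4))
label = predict(img.ravel()).argmax()
adv = one_pixel_attack(img, label)
if adv is not None:
    # a successful adversarial image differs from the original in exactly one pixel
    print(int((adv != img).sum()))
```

Differential evolution replaces this exhaustive loop when the image and value grid are too large to enumerate, but the query-only interface is the same.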
- ATTACK: Generalizable Adversarial Attacks Using Generative Models, arXiv 2019
- DISTRIBUTION: NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks 💭, ICML 2019
- CGAN: SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing, arXiv 2019
- AC-GAN, WGAN: Constructing Unrestricted Adversarial Examples with Generative Models, NeurIPS 2018
- GAN: Generating Adversarial Examples with Adversarial Networks, IJCAI 2018
- GENERATIVE, UNIVERSAL: Generative Adversarial Perturbations, CVPR 2018
- ATN: Learning to Attack: Adversarial transformation networks, AAAI 2018
- Rob-GAN: Rob-GAN: Generator, Discriminator, and Adversarial Attacker, CVPR 2019
- Learning Universal Adversarial Perturbations with Generative Models, S&P 2018
- CycleAdvGAN: Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense, arXiv 2019
- Generating Realistic Unrestricted Adversarial Inputs using Dual-Objective GAN Training 💭, arXiv 2019
- Spatially Transformed Adversarial Examples, ICLR 2018
- Sparse and Imperceivable Adversarial Attacks 💭, ICCV 2019
- Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions, arXiv 2019
- Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks, arXiv 2019
- Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking 💭, ICLR 2020
- Robust physical-world attacks on deep learning visual classification, CVPR 2018
- Adversarial Examples for Semantic Segmentation and Object Detection, ICCV 2017
- Adversarial Examples that Fool Detectors, arXiv 2017
- A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection, CVPR 2017
- Transferable Adversarial Attacks for Image and Video Object Detection, IJCAI 2019
- Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations, TPAMI 2019
- Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses, CVPR 2019
- Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, ICCV 2017
- Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR 2019
- Characterizing adversarial examples based on spatial consistency information for semantic segmentation, ECCV 2018
- UNIVERSAL: Universal Adversarial Perturbations Against Semantic Image Segmentation, ICCV 2017
- UNIVERSAL: Art of Singular Vectors and Universal Adversarial Perturbations, CVPR 2018
- Adversarial examples are not easily detected: Bypassing ten detection methods, AISec 2017
- SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations, arXiv 2019
- SparseFool: a few pixels make a big difference, CVPR 2019
- Adversarial Spheres, arXiv 2018
- Detection: Detecting adversarial samples from artifacts, arXiv 2017
- Detection: On Detecting Adversarial Perturbations 💭, ICLR 2017
- Defense-GAN: Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, ICLR 2018
- Retrieval-Augmented Convolutional Neural Networks against Adversarial Examples, CVPR 2019
- Feature Denoising for Improving Adversarial Robustness, CVPR 2019
- A New Defense Against Adversarial Images: Turning a Weakness into a Strength, NeurIPS 2019
- Ensemble Adversarial Training: Attacks and Defences, ICLR 2018
- Defense Against Universal Adversarial Perturbations, CVPR 2018
- Deflecting Adversarial Attacks With Pixel Deflection, CVPR 2018
- Jacobian Adversarially Regularized Networks for Robustness, ICLR 2020
- What it Thinks is Important is Important: Robustness Transfers through Input Gradients, CVPR 2020
- Virtual adversarial training: a regularization method for supervised and semi-supervised learning 💭, TPAMI 2018
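Many of the defense papers in this list build on adversarial training: solve a min-max problem by generating worst-case inputs with an inner attack (typically PGD, i.e. iterated FGSM projected onto an ε-ball) and taking the outer gradient step on those inputs. A minimal sketch on a toy logistic model (my own setup, not any single paper's recipe):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps, alpha=None, steps=5):
    """Iterated FGSM, projected onto the L-inf ball of radius eps around x."""
    alpha = alpha or eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        grad_x = (sigmoid(w @ x_adv + b) - y) * w     # dL/dx for logistic loss
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)      # project back into the ball
    return x_adv

def adv_train(X, Y, eps=0.1, lr=0.5, epochs=200):
    """Outer loop: SGD on the adversarially perturbed inputs (the min-max idea)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = pgd_attack(x, y, w, b, eps)       # inner maximization
            p = sigmoid(w @ x_adv + b)
            w -= lr * (p - y) * x_adv                 # outer minimization step
            b -= lr * (p - y)
    return w, b

# Toy usage: two well-separated clusters stay correctly classified under attack.
X = np.array([[1.0, 1.0], [0.9, 1.1], [-1.0, -1.0], [-1.1, -0.9]])
Y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adv_train(X, Y)
preds = [float(sigmoid(w @ pgd_attack(x, y, w, b, 0.1) + b) > 0.5) for x, y in zip(X, Y)]
print(preds == list(Y))   # True: robust on this separable toy data
```

Papers such as Ensemble Adversarial Training, Feature Scattering, and the triplet-loss variant above each replace or augment the inner maximization, but share this outer structure.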
- Adversarial Training and Robustness for Multiple Perturbations, NeurIPS 2019
- Adversarial Robustness through Local Linearization, NeurIPS 2019
- Adversarially Robust Representations with Smooth Encoders 💭, ICLR 2020
- Interpreting Adversarially Trained Convolutional Neural Networks, ICML 2019
- Robustness May Be at Odds with Accuracy 💭, ICLR 2019
- Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss, IJCAI 2019
- Adversarial Examples Are a Natural Consequence of Test Error in Noise 💭, ICML 2019
- Heat and Blur: An Effective and Fast Defense Against Adversarial Examples, arXiv 2020
- Adversarial Logit Pairing, arXiv 2018
- On the Connection Between Adversarial Robustness and Saliency Map Interpretability, ICML 2019
- Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training 💭, NeurIPS 2019
- Robustness of classifiers: from adversarial to random noise 💭, NeurIPS 2016
- Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, CVPR 2018
- Using Pre-Training Can Improve Model Robustness and Uncertainty, ICML 2019
- Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference, ICLR 2020
- SafetyNet: Detecting and Rejecting Adversarial Examples Robustly, ICCV 2017
- CVAE-GAN: CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training, ICCV 2017
- VAE-GAN: Autoencoding beyond pixels using a learned similarity metric, ICML 2016
- DATASET: Natural Adversarial Examples, arXiv 2019
- AC-GAN: Conditional Image Synthesis with Auxiliary Classifier GANs, ICML 2017
- SinGAN: SinGAN: Learning a Generative Model From a Single Natural Image, ICCV 2019
- Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks, ICLR 2020
- Pay Attention to Features, Transfer Learn Faster CNNs, ICLR 2020
- On Robustness of Neural Ordinary Differential Equations, ICLR 2020
- Real Image Denoising With Feature Attention, ICCV 2019
- Multi-Scale Dense Networks for Resource Efficient Image Classification, ICLR 2018
- Rethinking Data Augmentation: Self-Supervision and Self-Distillation, arXiv 2019
- Rich feature hierarchies for accurate object detection and semantic segmentation, CVPR 2014
- Spectral Normalization for Generative Adversarial Networks, ICLR 2018
- MetaGAN: An Adversarial Approach to Few-Shot Learning, NeurIPS 2018
- Breaking the cycle -- Colleagues are all you need, arXiv 2019
- LOGAN: Latent Optimisation for Generative Adversarial Networks, arXiv 2019