Christina Aigner's Projects
Adversarial Black box Explainer generating Latent Exemplars
A unified framework of perturbation- and gradient-based attribution methods for the interpretability of Deep Neural Networks. DeepExplain also supports Shapley Value sampling. (ICLR 2018)
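The Shapley Value sampling mentioned above is commonly implemented with the permutation-sampling estimator: average a feature's marginal contribution over random orderings. A minimal sketch of that estimator (not DeepExplain's actual API; the function name and parameters are illustrative):

```python
import numpy as np

def shapley_sampling(f, x, baseline, n_samples=200, rng=None):
    # Approximate Shapley values for model f at input x by averaging
    # each feature's marginal contribution over random feature orderings.
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        z = baseline.copy()
        prev = f(z)
        for i in perm:
            z[i] = x[i]              # add feature i to the coalition
            curr = f(z)
            phi[i] += curr - prev    # marginal contribution of feature i
            prev = curr
    return phi / n_samples

# Toy linear model: Shapley values equal w * (x - baseline) exactly.
w = np.array([1.0, 2.0, -3.0])
f = lambda z: float(w @ z)
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
print(shapley_sampling(f, x, baseline, n_samples=50, rng=0))  # → [ 1.  2. -3.]
```

For linear models every ordering yields the same marginal contributions, so the estimate is exact; for nonlinear models the variance shrinks as `n_samples` grows.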
A comparative study of the paper "Exploring Generalization in Deep Learning" (https://papers.nips.cc/paper/7176-exploring-generalization-in-deep-learning.pdf)
A unified interpretability framework for PyTorch deep neural networks on visual recognition tasks, comprising various visualization techniques and uncertainty measures. Please use the latest release of our GitLab version.