
ekagra-ranjan / ae-cnn

45 stars · 4 watchers · 17 forks · 10.16 MB

ICVGIP '18 Oral Paper - Classification of thoracic diseases on the ChestX-Ray14 dataset

Python 100.00%
deep-learning medical-imaging chestxray14 pytorch ae-cnn autoencoder cnn chest-xrays chest-xray-images chest-x-ray8

ae-cnn's Introduction

Hi there 👋

Check out my Professional Portfolio to learn more about my work.

ae-cnn's People

Contributors

ekagra-ranjan


ae-cnn's Issues

Trouble reproducing baseline results

Hi there! First of all, I think the paper is really interesting. Second, I appreciate that this is one of the few open-source repos I could find that provides training code for the NIH dataset and uses the official train-test split.

I am trying to independently reproduce the DenseNet121 baseline you report in the paper (over 0.82 AUROC with augmentation), but I am not coming close. I am using exactly the same data splits as you, but I have written my own training and model-definition scripts; I am still using the pre-trained ImageNet weights provided by torchvision and, as far as I can tell, the same hyperparameters and preprocessing steps as you.
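
For reference, a minimal sketch of the kind of DenseNet121 baseline described above. This is illustrative only, not the repo's actual code; the learning rate, optimizer choice, and head layout are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 14  # the 14 ChestX-ray14 pathology labels


def build_densenet121_baseline(num_classes: int = NUM_CLASSES) -> nn.Module:
    """DenseNet121 pre-trained on ImageNet with a multi-label head.

    Sketch only -- the head layout and training settings are assumptions,
    not necessarily what the AE-CNN paper or repo uses.
    """
    # "DEFAULT" selects the ImageNet weights in torchvision >= 0.13;
    # older versions use models.densenet121(pretrained=True) instead.
    model = models.densenet121(weights="DEFAULT")
    in_features = model.classifier.in_features  # 1024 for DenseNet121
    # One logit per pathology; the sigmoid is applied inside the loss.
    model.classifier = nn.Linear(in_features, num_classes)
    return model


model = build_densenet121_baseline()
# Multi-label objective: binary cross-entropy over the 14 logits.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr is a guess
```

The single linear head with BCEWithLogitsLoss treats the task as 14 independent binary classifications, which is the usual multi-label setup for ChestX-ray14.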

I've tried:

• unweighted cross-entropy;
• class-weighted cross-entropy;
• heavy augmentation (including elastic deformations, cutout, etc.);
• light augmentation (e.g., just a random crop to 224 and a horizontal flip; sketched below);
• 14 output classes;
• 15 output classes (treating "No Finding" as an additional output class).

No matter what I've tried, I see rather quick convergence and overfitting (the best validation loss is reached no later than epoch 7), and the highest validation AUROC I've seen is 0.814. This is considerably lower than the 0.839 validation AUROC you report in Table 2 for BL5-DenseNet121. The best test-set result I've achieved with a DenseNet121 architecture is 0.807 AUROC with 8x test-time augmentation.
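
For concreteness, a minimal sketch of the "light augmentation" pipeline mentioned above (random 224 crop plus horizontal flip). The Resize(256) step and the ImageNet normalization constants are assumptions, following the common CheXNet-style preprocessing rather than anything stated in the paper.

```python
from torchvision import transforms

# ImageNet statistics, since the backbone is ImageNet-pretrained.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

# "Light" training augmentation: random 224 crop + horizontal flip.
train_transform = transforms.Compose([
    transforms.Resize(256),          # assumed; shorter side resized to 256
    transforms.RandomCrop(224),      # random crop to 224x224
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])

# Deterministic evaluation transform (no TTA): center crop to 224.
eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```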

I'm pretty puzzled by this, because I don't think random variation in training or minor implementation differences should cause a drop of more than 0.015 AUROC. There are of course many potential sources for the difference, but maybe you can help me pinpoint it.

For your final models, did you use 14 or 15 classes? Also, would you be able to share any training/learning curves showing loss or AUROC vs. number of epochs? I am suspicious of how quickly my baseline models are converging, and I'm wondering how my training trajectories compare to yours.
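
On the metric itself: the AUROC numbers quoted above assume the usual ChestX-ray14 convention of an unweighted mean of per-class ROC AUCs over the 14 pathologies. A minimal sketch of that computation with scikit-learn (illustrative; the helper name and the skip-degenerate-class handling are not from the repo):

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def mean_auroc(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """Unweighted mean of per-class ROC AUCs.

    y_true:  (N, 14) binary label matrix
    y_score: (N, 14) predicted probabilities (or logits; AUC is rank-based)
    """
    per_class = []
    for c in range(y_true.shape[1]):
        # Skip a class if the split happens to contain only one label value
        # for it (roc_auc_score would raise an error otherwise).
        if len(np.unique(y_true[:, c])) < 2:
            continue
        per_class.append(roc_auc_score(y_true[:, c], y_score[:, c]))
    return float(np.mean(per_class))
```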

Trained weights

Hi,
I couldn't find the trained weights. Could you share them, please?
