Check out my Professional Portfolio to know more about my work.
ekagra-ranjan / ae-cnn
ICVGIP '18 Oral Paper - Classification of thoracic diseases on ChestX-Ray14 dataset
Hi there! First of all, I think the paper is really interesting. Second, I appreciate that this is one of the few open-source repos I can find that provides training code for the NIH dataset and uses the official train-test split.
I am trying to independently reproduce the DenseNet121 baseline you provide in the paper (over 0.82 AUROC with augmentation), but I am not coming close. I am using the exact same data splits as you, but have written my own training and model definition scripts; I am still using pre-trained ImageNet weights provided by torchvision and, as far as I can tell, all the same hyperparameters and preprocessing steps as you have.
I've tried unweighted cross-entropy, class-weighted cross-entropy, heavy augmentation (including elastic deformations, cutout, etc.), light augmentation (e.g., just a random crop to 224 and a horizontal flip), using 14 classes, and using 15 classes (treating "No Finding" as another output class). No matter what I've tried, I see rather quick convergence and overfitting (best validation loss achieved no later than epoch 7), and the highest validation AUROC I've seen is 0.814. This is considerably lower than the 0.839 validation AUROC you report in Table 2 for BL5-DenseNet121. The absolute best test-set result I've achieved with a DenseNet121 architecture is 0.807 AUROC with 8x test-time augmentation.
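For reference, here is how I'm computing the AUROC numbers I quote above: a macro average of per-class AUROC over the 14 labels, with each class's AUROC computed via the rank-sum (Mann-Whitney U) identity. This is a minimal stdlib-only sketch of my own evaluation code, not taken from your repo, so let me know if your reported numbers are averaged differently.

```python
def auroc(labels, scores):
    """AUROC for one binary label via the Mann-Whitney U identity:
    the fraction of (positive, negative) pairs where the positive
    example gets the higher score (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        return float("nan")  # class absent from this split
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auroc(label_matrix, score_matrix):
    """Unweighted mean of per-class AUROC, which I believe is the
    convention for ChestX-Ray14 baselines (one row per image,
    one column per disease label)."""
    n_classes = len(label_matrix[0])
    per_class = [auroc([row[c] for row in label_matrix],
                       [row[c] for row in score_matrix])
                 for c in range(n_classes)]
    return sum(per_class) / len(per_class)
```

If your Table 2 numbers are instead weighted by class prevalence, that alone could explain part of the gap.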
I'm pretty puzzled by this because I don't think random variation in training or minor implementation differences should cause a >0.015-point drop in AUROC... Of course there are a million potential sources for this difference, but maybe you can help me pinpoint it.
For your final models, did you use 14 or 15 classes? Also, would you be able to provide any sort of training/learning curve showing loss or AUROC vs. # epochs? I am suspicious of how quickly my baseline models are converging, and am wondering how my training trajectories compare to yours.
Could you provide the model weight file?
Hi,
It would be great if you could clarify the following operation on the test dataset:
AE-CNN/ae-cnn/densenet121/ChexnetTrainer.py
Line 136 in de16e25
Best
Hello,
It would be great if you could open-source your code.
Cheers,
Hi,
I couldn't find the trained weights. Could you share them, please?