ku-milab / mgicnn


Implementation of "Multi-scale Gradual Integration Convolutional Neural Network for False Positive Reduction in Pulmonary Nodule Detection"

Python 100.00%

mgicnn's People

Contributors

bck8888, junsikchoi, wltjr1007

mgicnn's Issues

How to test fixed data

Hello, after I have trained this model, how should I evaluate it on a specific, fixed test set?

MemoryError: patch_dat = np.zeros(shape=(len(candidates), 28, 42, 42), dtype=np.float32)

Hi! I was trying to rerun the code and reproduce the results. After running it, I got this error:
Traceback (most recent call last):
File "main.py", line 139, in
train_proposed()
File "main.py", line 12, in train_proposed
trn_dat, trn_lbl, tst_dat, tst_lbl = utils.load_fold()
File "/home/fancywu/Desktop/Lung_nodule_detection/Test_code_data/download_code/MGICNN-master/utils.py", line 188, in load_fold
split_fold()
File "/home/fancywu/Desktop/Lung_nodule_detection/Test_code_data/download_code/MGICNN-master/utils.py", line 112, in split_fold
augment_data()
File "/home/fancywu/Desktop/Lung_nodule_detection/Test_code_data/download_code/MGICNN-master/utils.py", line 89, in augment_data
load_raw_data()
File "/home/fancywu/Desktop/Lung_nodule_detection/Test_code_data/download_code/MGICNN-master/utils.py", line 60, in load_raw_data
patch_dat = np.zeros(shape=(len(candidates), 28, 42, 42), dtype=np.float32)
MemoryError

I computed the size of patch_dat and found that it requires about 138.9 GiB of memory. Is there any way to reduce the memory cost? How should I deal with this problem?
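
One possible workaround, sketched below, is to back the patch array with a NumPy memmap on disk instead of allocating it all in RAM. The shape and dtype follow the traceback above; the file path, chunk size, and the extract_patches() helper are assumptions for illustration, not part of the repo.

import numpy as np

# Shape and dtype taken from the traceback: (len(candidates), 28, 42, 42), float32.
num_candidates = len(candidates)           # 'candidates' as built in utils.load_raw_data()
patch_shape = (num_candidates, 28, 42, 42)

# Back the array with a file on disk instead of allocating ~139 GiB of RAM.
# "patch_dat.dat" is a hypothetical path, not something the repo creates.
patch_dat = np.memmap("patch_dat.dat", dtype=np.float32, mode="w+", shape=patch_shape)

# Fill the array in chunks so only a small slice is resident in memory at a time.
chunk = 1024
for start in range(0, num_candidates, chunk):
    end = min(start + chunk, num_candidates)
    # extract_patches() stands in for the repo's actual patch-extraction step.
    patch_dat[start:end] = extract_patches(candidates[start:end])

patch_dat.flush()

Alternatively, storing the patches as float16 or processing one subset/fold of scans at a time would also shrink the in-memory footprint.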

Missing function test_patch_extraction

Hi,
Line 34 in CAD_nodule_Detection.py reads as follows:
btm, mid, top = def_Patch_Extract_.test_patch_extraction(DAT_DATA_Path, CT_scans)
However, there is no test_patch_extraction function in def_Patch_Extract_.py. Did you forget to add the function?
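
For context, here is a hypothetical sketch of what such a function could look like, given that the call site expects three scales of patches (btm, mid, top) per candidate. The simplified signature, the patch sizes, and the crop_2d_patch helper are all assumptions for illustration; they are not the authors' missing implementation.

import numpy as np

def crop_2d_patch(volume, z, y, x, size):
    # Hypothetical helper: crop a size x size axial patch centred on (z, y, x).
    # Border handling is omitted for brevity.
    half = size // 2
    return volume[z, y - half:y + half, x - half:x + half]

def test_patch_extraction(ct_volume, candidates, sizes=(20, 30, 40)):
    # Return three arrays of patches (bottom/middle/top scales), one per candidate.
    # The scales in `sizes` are assumptions, not the paper's exact values.
    btm, mid, top = [], [], []
    for z, y, x in candidates:
        btm.append(crop_2d_patch(ct_volume, z, y, x, sizes[0]))
        mid.append(crop_2d_patch(ct_volume, z, y, x, sizes[1]))
        top.append(crop_2d_patch(ct_volume, z, y, x, sizes[2]))
    return np.asarray(btm), np.asarray(mid), np.asarray(top)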

Mismatch between the paper and the repo

Hello,

Thank you for sharing your work. I have noticed a mismatch between the paper and the repo:

1. In the paper, you mention using ReLU, while the code uses LeakyReLU.
2. In the paper, you mention using SGD, while the code uses Adam.
3. In the paper, you mention a learning rate decayed by 2.5% each epoch, while the code uses a static learning rate (see the sketch after this issue).

Could you kindly elaborate on which settings to use?

Many thanks :)
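
For reference, here is a minimal sketch of the schedule described in the paper (SGD with the learning rate reduced by 2.5% after each epoch), written against TensorFlow 1.x since another issue below reports running the repo with tfv1. The initial learning rate, steps_per_epoch, and the loss variable are assumptions for illustration, not values taken from the paper or the code.

import tensorflow as tf  # TensorFlow 1.x, as reported elsewhere in these issues

global_step = tf.train.get_or_create_global_step()
steps_per_epoch = 1000   # assumption: depends on batch size and fold size
initial_lr = 0.01        # assumption: the paper's exact value may differ

# Decay the learning rate by 2.5% after every epoch (staircase schedule).
learning_rate = tf.train.exponential_decay(
    initial_lr, global_step,
    decay_steps=steps_per_epoch,
    decay_rate=1.0 - 0.025,
    staircase=True)

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)  # 'loss' from the model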

How to get the LUNA16 dataset?

I have not joined the LUNA16 challenge. Could I still get the LUNA16 dataset and the tutorials, and if so, how?
If you see this, please tell me. Thank you!

Mismatched total number of candidates

Hello,

After evaluating the predictions file (METU_VISION_FPRED.csv) with the official LUNA16 script (noduleCADEvaluationLUNA16.py), I get an output (CADAnalysis.txt) like this:

CADAnalysis.txt


CAD Analysis: METU_VISION_FPRED


Candidate detection results:
True positives: 1128
False positives: 81706
False negatives: 58
True negatives: 0
Total number of candidates: 87794
Total number of nodules: 1186
Ignored candidates on excluded nodules: 4600
Ignored candidates which were double detections on a nodule: 360
Sensitivity: 0.951096121
Average number of candidates per scan: 98.867117117

From what I understand, the total number of candidates should be 754,975 according to https://luna16.grand-challenge.org/Evaluation/

However, even though METU_VISION_FPRED.csv contains 754,975 candidates, CADAnalysis.txt still reports a total of only 87,794. Any idea why that happens? Am I missing something?

Looking forward to hearing back from you. Any help is extremely appreciated.

Many thanks.
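
One way to narrow this down is to compare what is actually in the submission CSV with the seriesuids passed to the evaluation script. Below is a minimal sketch with pandas, assuming the standard LUNA16 submission columns; the seriesuids.csv path is an assumption and should be replaced with whatever list the evaluator is given.

import pandas as pd

# Predictions as submitted to noduleCADEvaluationLUNA16.py.
preds = pd.read_csv("METU_VISION_FPRED.csv")

# List of scans the evaluation script was asked to evaluate (path is an assumption).
seriesuids = pd.read_csv("seriesuids.csv", header=None)[0]

print("Candidates in submission:", len(preds))
print("Scans in submission:", preds["seriesuid"].nunique())
print("Scans given to the evaluator:", len(seriesuids))

# Candidates whose seriesuid is not in the evaluator's list are not counted,
# which is one possible source of a smaller reported total.
missing = preds[~preds["seriesuid"].isin(seriesuids)]
print("Candidates on scans the evaluator ignores:", len(missing))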

Incomplete code and mismatched CPM value

Hi,
Thanks for sharing your well-explained code, but the final test part does not compute the CPM. I ran the code without any changes (TensorFlow 1.x, Python 3.6) and evaluated the final results (probabilities) with the official LUNA evaluation script. Unfortunately, the best CPM I obtained is below 80%, which is very far from the >94% reported in the paper.
There are two possible explanations:
1. There is some misconfiguration in the shared repo (as you can see, there are some differences between the code and the paper: optimization algorithm, learning rate, ...).
2. You compute the CPM in a different way (some exclusions in seriesuids, candidates, etc.).
You can see my evaluation code in my repo (I shared it with [email protected] and [email protected]).
Can you share your CPM evaluation code with me?
([email protected])
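
For reference, the CPM used in the LUNA16 false positive reduction track is the sensitivity averaged at 1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives per scan on the FROC curve. Below is a minimal sketch of that averaging step, assuming you already have the FROC curve as arrays of false positives per scan and sensitivity (for example, from the FROC output produced by the official evaluation script); it is not the authors' evaluation code.

import numpy as np

def cpm_from_froc(fps_per_scan, sensitivity):
    # Average sensitivity at the seven LUNA16 operating points.
    # fps_per_scan and sensitivity are 1-D arrays describing the FROC curve,
    # assumed sorted by increasing false positives per scan.
    operating_points = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    sens_at_points = np.interp(operating_points, fps_per_scan, sensitivity)
    return sens_at_points.mean()

# Example usage with made-up numbers:
# fps = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
# sens = np.array([0.70, 0.82, 0.88, 0.91, 0.93, 0.95, 0.96])
# print(cpm_from_froc(fps, sens))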
