
johnleehit / brats2019

[BrainLes2019] Multi-step cascaded network for brain tumor segmentations (tensorflow)

Home Page: https://arxiv.org/abs/1908.05887

License: MIT License

Python 100.00%
segmentation tensorflow brain-tumor-segmentation state-of-the-art cascaded-cnn

brats2019's Introduction

Hi there 👋

  • 🔭 I'm a final-year PhD student at Harbin Institute of Technology, mainly focused on deep learning-based medical image segmentation.
  • 🌱 I'm currently studying uncertainty in medical image segmentation tasks and learning with ambiguous labels.


brats2019's People

Contributors

dependabot[bot] · johnleehit


brats2019's Issues

After training 9000 epochs, the Dice metric is low (especially for ET)

Hello!
All I changed is "parameters.ini", including phase, traindata dir, and testdata dir.
The traindata dir is just the unzipped "MICCAI_BraTS_2019_Data_Training" folder.
Because I don't have the validation labels, I set testdata_dir to exactly the same path as traindata_dir.

The "parameters.ini" as follow

[param_setting]
; train / test / gen_map
phase = train
; batch size in training, 1
batch_size = 1
; input image size, 96
inputI_size = 96
; input channel number, 1
inputI_chn = 2
; output image size, 96
outputI_size = 96
; output channel, 8
output_chn = 2
; label rename map
rename_map = 0, 1, 2, 4
; volume resize ratio
resize_r = 1
; training data directory  traindata_dir =  /home/lixiangyu/Dataset/BraTS2019 /home/server/home/Dataset/BraTS2019
traindata_dir =  /data/yangjie/MICCAI_BraTS_2019_Data_Training
; checkpoint directory
chkpoint_dir = outcome/checkpoint
; learning rate of Adam, 1e-3. At first 0.001; when reaching a plateau, decrease to 0.0005
learning_rate = 0.001
; momentum term of Adam, 0.5
beta1 = 0.5
; training epoch, 10000
epoch =20000
; model name
model_name = ds_ft_hybrid_4ct.model
; model save interval
save_intval = 1000
; testing data  directory /home/lixiangyu/Dataset/mix/test /home/server/home/Dataset/mix/test
testdata_dir =  /data/yangjie/MICCAI_BraTS_2019_Data_Training
; labeling output directory
labeling_dir = outcome/label
; cube overlap factor: training:1 test:4
ovlp_ita =1

step=9000
Stages=6
Blocks=1
Columns=3
; Hard negative mining parameters
fg_ratio = 2
bg_ratio = 32
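
For reference, an INI file like the one above can be loaded with Python's standard configparser (a minimal sketch over a trimmed-down fragment; the repository may parse its config differently):

```python
import configparser

# Parse a fragment shaped like the parameters.ini quoted above.
cfg = configparser.ConfigParser()
cfg.read_string("""
[param_setting]
phase = train
batch_size = 1
rename_map = 0, 1, 2, 4
learning_rate = 0.001
""")

sec = cfg['param_setting']
batch_size = sec.getint('batch_size')
learning_rate = sec.getfloat('learning_rate')
# rename_map is a comma-separated list; split it manually.
rename_map = [int(v) for v in sec['rename_map'].split(',')]
print(batch_size, learning_rate, rename_map)  # 1 0.001 [0, 1, 2, 4]
```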

Then I just train the model by running main.py from the command line.
After training 9000 epochs, the Dice metric is low (especially for ET, about 0.1).
The train.log file is as follows:

Configurations:
phase                          train
batch_size                     1
inputI_size                    96
inputI_chn                     2
outputI_size                   96
output_chn                     2
rename_map                     0, 1, 2, 4
resize_r                       1.0
traindata_dir                  /data/yangjie/MICCAI_BraTS_2019_Data_Training
chkpoint_dir                   outcome/checkpoint
learning_rate                  0.001
beta1                          0.5
epoch                          20000
model_name                     ds_ft_hybrid_4ct.model
save_intval                    1000
testdata_dir                   /data/yangjie/MICCAI_BraTS_2019_Data_Training
labeling_dir                   outcome/label
ovlp_ita                       1
step                           9000
Stages                         6
Blocks                         1
Columns                        3
fg_ratio                       2.0
bg_ratio                       32.0
focal_loss_flag                False
[WT, TC, ET]:  average dice: [0.441, 0.224, 0.021]  mean average dice : 0.22866666666666668 average sensitivity: [0.962, 0.951, 0.082]  mean average sensitivity : 0.6649999999999999
[WT, TC, ET]:  average dice: [0.375, 0.219, 0.034]  mean average dice : 0.20933333333333334 average sensitivity: [0.978, 0.944, 0.269]  mean average sensitivity : 0.7303333333333333
[WT, TC, ET]:  average dice: [0.5, 0.263, 0.025]  mean average dice : 0.26266666666666666 average sensitivity: [0.969, 0.972, 0.118]  mean average sensitivity : 0.6863333333333332
[WT, TC, ET]:  average dice: [0.434, 0.265, 0.052]  mean average dice : 0.25033333333333335 average sensitivity: [0.978, 0.966, 0.229]  mean average sensitivity : 0.7243333333333334
[WT, TC, ET]:  average dice: [0.479, 0.298, 0.04]  mean average dice : 0.2723333333333333 average sensitivity: [0.962, 0.937, 0.189]  mean average sensitivity : 0.6960000000000001
[WT, TC, ET]:  average dice: [0.556, 0.332, 0.046]  mean average dice : 0.3113333333333334 average sensitivity: [0.971, 0.951, 0.175]  mean average sensitivity : 0.699
[WT, TC, ET]:  average dice: [0.542, 0.282, 0.174]  mean average dice : 0.33266666666666667 average sensitivity: [0.964, 0.975, 0.258]  mean average sensitivity : 0.7323333333333334
[WT, TC, ET]:  average dice: [0.586, 0.328, 0.096]  mean average dice : 0.33666666666666667 average sensitivity: [0.98, 0.983, 0.322]  mean average sensitivity : 0.7616666666666667
[WT, TC, ET]:  average dice: [0.407, 0.245, 0.062]  mean average dice : 0.238 average sensitivity: [0.975, 0.974, 0.311]  mean average sensitivity : 0.7533333333333333
[WT, TC, ET]:  average dice: [0.516, 0.274, 0.07]  mean average dice : 0.2866666666666667 average sensitivity: [0.984, 0.989, 0.212]  mean average sensitivity : 0.7283333333333334
[WT, TC, ET]:  average dice: [0.535, 0.31, 0.065]  mean average dice : 0.3033333333333333 average sensitivity: [0.985, 0.968, 0.165]  mean average sensitivity : 0.706

The test.log file is as follows:

[WT, TC, ET]:  average dice: [0.513, 0.228, 0.041]  mean average dice : 0.26066666666666666 average sensitivity: [0.967, 0.979, 0.075]  mean average sensitivity : 0.6736666666666666
[WT, TC, ET]:  average dice: [0.445, 0.233, 0.087]  mean average dice : 0.255 average sensitivity: [0.983, 0.976, 0.211]  mean average sensitivity : 0.7233333333333333
[WT, TC, ET]:  average dice: [0.571, 0.274, 0.082]  mean average dice : 0.309 average sensitivity: [0.974, 0.985, 0.138]  mean average sensitivity : 0.699
[WT, TC, ET]:  average dice: [0.502, 0.285, 0.176]  mean average dice : 0.32099999999999995 average sensitivity: [0.981, 0.981, 0.267]  mean average sensitivity : 0.743
[WT, TC, ET]:  average dice: [0.57, 0.354, 0.165]  mean average dice : 0.363 average sensitivity: [0.96, 0.968, 0.253]  mean average sensitivity : 0.727
[WT, TC, ET]:  average dice: [0.626, 0.375, 0.137]  mean average dice : 0.3793333333333333 average sensitivity: [0.977, 0.976, 0.166]  mean average sensitivity : 0.7063333333333333
[WT, TC, ET]:  average dice: [0.629, 0.304, 0.292]  mean average dice : 0.4083333333333334 average sensitivity: [0.966, 0.988, 0.294]  mean average sensitivity : 0.7493333333333333
[WT, TC, ET]:  average dice: [0.656, 0.345, 0.274]  mean average dice : 0.425 average sensitivity: [0.984, 0.992, 0.358]  mean average sensitivity : 0.778
[WT, TC, ET]:  average dice: [0.49, 0.291, 0.198]  mean average dice : 0.3263333333333333 average sensitivity: [0.979, 0.983, 0.379]  mean average sensitivity : 0.7803333333333334
[WT, TC, ET]:  average dice: [0.593, 0.296, 0.19]  mean average dice : 0.35966666666666663 average sensitivity: [0.984, 0.995, 0.254]  mean average sensitivity : 0.7443333333333334
[WT, TC, ET]:  average dice: [0.604, 0.343, 0.177]  mean average dice : 0.3746666666666667 average sensitivity: [0.989, 0.988, 0.246]  mean average sensitivity : 0.741

Besides, while I am training, the loss fluctuates.
Do you know the reason? I would appreciate it if you could reply.
Thanks a lot.
Best wishes!
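
For context, the Dice score reported in the logs above is typically computed per tumor region (WT, TC, ET) as twice the overlap divided by the total positive voxels. A minimal NumPy sketch, assuming binary masks (the repository's exact implementation may differ):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient 2|A∩B| / (|A|+|B|) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy example: half-overlapping 1-D masks (1 shared voxel, 2+2 positives).
pred = np.array([1, 1, 0, 0])
target = np.array([1, 0, 1, 0])
print(round(dice_score(pred, target), 3))  # 0.5
```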

model

Hi, in the code you provided, I did not find the code that saves the trained model. Can you help me?
Best wishes
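
For anyone with the same question: in TF 1.x (which this repository targets), checkpoints are usually written with tf.train.Saver, e.g. once every save_intval steps from parameters.ini. A hedged sketch, not the repository's actual code; save_checkpoint is a hypothetical helper:

```python
def save_checkpoint(sess, chkpoint_dir, model_name, step):
    """Save a TF 1.x session's variables to chkpoint_dir/model_name-step.
    Hypothetical helper mirroring the chkpoint_dir / model_name /
    save_intval settings in parameters.ini."""
    import os
    import tensorflow as tf  # TF 1.x API assumed, as in the repository

    os.makedirs(chkpoint_dir, exist_ok=True)
    saver = tf.train.Saver(max_to_keep=5)
    # Writes model_name-<step>.index/.data files plus a 'checkpoint' file.
    saver.save(sess, os.path.join(chkpoint_dir, model_name), global_step=step)
```

In the training loop this would be called as `if step % save_intval == 0: save_checkpoint(sess, 'outcome/checkpoint', 'ds_ft_hybrid_4ct.model', step)`.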

Cross validation may be better

Hi sir, your code is very good. It looks like you use the first 50 patients in the training data set to validate the model after many epochs of training. I think cross-validation could be used instead; do you agree? Thank you very much!
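
The suggested k-fold split can be sketched in plain Python (kfold_splits is an illustrative helper, not part of the repository; sklearn's KFold provides the same functionality):

```python
def kfold_splits(ids, k=5):
    """Yield (train_ids, val_ids) pairs for k-fold cross-validation.
    `ids` could be the list of BraTS patient directories."""
    folds = [ids[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, val

# Toy example with 10 hypothetical patient IDs and 5 folds.
patients = [f"patient_{i:03d}" for i in range(10)]
for train, val in kfold_splits(patients, k=5):
    print(len(train), len(val))  # 8 2 for each fold
```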

Why is there an "AttributeError: 'BatchGenerator' object has no attribute 'next'"?

Traceback (most recent call last):
  File "/home/xwl/Brats2019/src/main.py", line 72, in <module>
    tf.app.run()
  File "/home/anaconda3/envs/tf112/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/xwl/Brats2019/src/main.py", line 47, in main
    model.train()  # training process
  File "/home/xwl/Brats2019/src/operations.py", line 279, in train
    batch_img, batch_img2, batch_label, batch_label_stage2, batch_label_stage3 = next(data_generator)
  File "/home/anaconda3/envs/tf112/lib/python3.6/site-packages/keras_preprocessing/image.py", line 1526, in __next__
    return self.next(*args, **kwargs)
AttributeError: 'BatchGenerator' object has no attribute 'next'
I have tried to change the code from
"batch_img, batch_img2, batch_label, batch_label_stage2, batch_label_stage3 = next(data_generator)"
to
"batch_img, batch_img2, batch_label, batch_label_stage2, batch_label_stage3 = data_generator.next()"
but this error still exists!
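
As the traceback shows, the keras_preprocessing Iterator base class implements __next__ by delegating to a next() method, so a subclass must define next() itself; calling data_generator.next() from outside cannot fix a method that doesn't exist. A minimal illustration of the pattern (BatchGenerator here is a simplified stand-in, not the repository's class):

```python
class BatchGenerator:
    """Minimal iterator following the keras_preprocessing convention:
    __next__ delegates to next(), so the subclass must define next()."""

    def __init__(self, data):
        self.data = data
        self.index = 0

    def next(self):
        # Defining this method is what fixes
        # "'BatchGenerator' object has no attribute 'next'".
        if self.index >= len(self.data):
            raise StopIteration
        batch = self.data[self.index]
        self.index += 1
        return batch

    def __next__(self):
        # Same delegation the keras_preprocessing base class performs.
        return self.next()

gen = BatchGenerator([10, 20, 30])
print(next(gen))  # 10
print(next(gen))  # 20
```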

IndexError: list index out of range

Good morning sir,

I tried to execute your code on the BraTS 2019 dataset, and while running it I got the following error. Kindly help resolve this issue.
[screenshot of the error; image not rendered]

Thank you!!

In model unet_resnet, why apply expand_dims to input_pred_softmax?

In model unet_resnet, you apply expand_dims to input_pred_softmax, which changes its shape from (1,96,96,96,2) (when the input channel is 2) to (1,96,96,96,1).
So in input_attention = forground_input_pred * input_img, input_attention gets shape (1,96,96,96,1): its channel count is 1, while the input's channel count is 2, so they do not match. Why apply expand_dims, and how does it influence the segmentation result?
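
The shape question can be checked numerically: multiplying a (1,96,96,96,1) foreground map with a (1,96,96,96,2) image broadcasts the single-channel mask over both input channels, so the attention output keeps the input's channel count rather than collapsing to one channel. A small NumPy sketch with a reduced 4×4×4 volume (variable names follow the issue, not necessarily the repository):

```python
import numpy as np

# Softmax over 2 classes on a tiny 4x4x4 volume instead of 96^3.
input_pred_softmax = np.random.rand(1, 4, 4, 4, 2)

# Take the foreground probability (class 1), which drops the channel
# axis, then restore it with expand_dims.
forground_input_pred = np.expand_dims(input_pred_softmax[..., 1], axis=-1)
print(forground_input_pred.shape)  # (1, 4, 4, 4, 1)

# Broadcasting spreads the 1-channel mask over both image channels.
input_img = np.random.rand(1, 4, 4, 4, 2)
input_attention = forground_input_pred * input_img
print(input_attention.shape)  # (1, 4, 4, 4, 2)
```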

You load pre-trained weights only for the first model, named unet, don't you?

In your code file "oprations.py", you extract some layers for fine-tuning, and I found these layers only from the first mode unet. Does it means just do fine-tuning for unet mode , but not do for unet_resnet mode?

New learner of tensorflow, I don't really understand fine-tuning in tensorflow, thank you very much!
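
One detail worth checking when selecting layers by scope for fine-tuning: a bare prefix test on "unet" also matches "unet_resnet", so the scope must be matched with its trailing slash. A framework-agnostic sketch of the filtering (the variable names below are illustrative, not taken from the repository):

```python
def vars_in_scope(var_names, scope):
    """Select variable names under a scope, matching 'scope/' exactly so
    that 'unet' does not accidentally match 'unet_resnet'."""
    prefix = scope.rstrip('/') + '/'
    return [n for n in var_names if n.startswith(prefix)]

names = [
    'unet/conv1/kernel',
    'unet/conv1/bias',
    'unet_resnet/conv1/kernel',
]
print(vars_in_scope(names, 'unet'))         # only the two unet/ variables
print(vars_in_scope(names, 'unet_resnet'))  # only the unet_resnet/ variable
```

In TF 1.x, a list built this way is typically passed as the var_list argument of tf.train.Saver to restore only one sub-network's weights.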

Why not do N4BiasFieldCorrection?

Your code has a function named N4BiasFieldCorrection, but you didn't actually use it in your data processing. Why not do bias field correction?

Is bias field correction necessary for data preprocessing? Thank you very much!
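
For reference, N4 bias field correction is available in SimpleITK as N4BiasFieldCorrectionImageFilter. A minimal wrapper sketch (n4_correct is a hypothetical helper, not the repository's function; it assumes SimpleITK is installed and the input is a single-modality MRI volume):

```python
def n4_correct(image_path, shrink_factor=2):
    """Apply N4 bias field correction to an MRI volume with SimpleITK.
    Hypothetical helper, not the repository's N4BiasFieldCorrection."""
    import SimpleITK as sitk  # imported lazily; requires SimpleITK

    image = sitk.ReadImage(image_path, sitk.sitkFloat32)
    # Rough head mask via Otsu thresholding (inside=0, outside=1).
    mask = sitk.OtsuThreshold(image, 0, 1, 200)
    # Shrinking speeds up the correction considerably on 3-D volumes.
    dims = image.GetDimension()
    small = sitk.Shrink(image, [shrink_factor] * dims)
    small_mask = sitk.Shrink(mask, [shrink_factor] * dims)
    corrector = sitk.N4BiasFieldCorrectionImageFilter()
    corrector.Execute(small, small_mask)
    # Recover the full-resolution correction from the log bias field.
    log_bias = corrector.GetLogBiasFieldAsImage(image)
    return image / sitk.Exp(log_bias)
```

Whether the extra preprocessing helps BraTS segmentation is an empirical question; many pipelines rely on per-volume intensity normalization alone.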

Still a big gap from your best result (training average dice: [0.787, 0.546, 0.446] vs [0.915, 0.83, 0.791])

Hello, sorry to bother you. I tried your advice to train with only the HGG data. The result truly improved over the last one: average Dice increased by 10 points. But I cannot reproduce your best result, and there is still a big gap.
After training 20000 epochs, the train.log file is as follows:

[WT, TC, ET]:  average dice: [0.592, 0.323, 0.082]  mean average dice : 0.3323333333333333 average sensitivity: [0.969, 0.973, 0.085]  mean average sensitivity : 0.6756666666666667
[WT, TC, ET]:  average dice: [0.422, 0.365, 0.17]  mean average dice : 0.319 average sensitivity: [0.982, 0.965, 0.247]  mean average sensitivity : 0.7313333333333333
[WT, TC, ET]:  average dice: [0.692, 0.433, 0.242]  mean average dice : 0.45566666666666666 average sensitivity: [0.969, 0.969, 0.246]  mean average sensitivity : 0.7280000000000001
[WT, TC, ET]:  average dice: [0.45, 0.344, 0.014]  mean average dice : 0.26933333333333337 average sensitivity: [0.991, 0.979, 0.022]  mean average sensitivity : 0.664
[WT, TC, ET]:  average dice: [0.682, 0.407, 0.341]  mean average dice : 0.4766666666666666 average sensitivity: [0.979, 0.984, 0.413]  mean average sensitivity : 0.7919999999999999
[WT, TC, ET]:  average dice: [0.598, 0.389, 0.283]  mean average dice : 0.42333333333333334 average sensitivity: [0.983, 0.984, 0.322]  mean average sensitivity : 0.763
[WT, TC, ET]:  average dice: [0.679, 0.438, 0.252]  mean average dice : 0.4563333333333333 average sensitivity: [0.983, 0.977, 0.254]  mean average sensitivity : 0.738
[WT, TC, ET]:  average dice: [0.678, 0.439, 0.262]  mean average dice : 0.45966666666666667 average sensitivity: [0.985, 0.982, 0.274]  mean average sensitivity : 0.747
[WT, TC, ET]:  average dice: [0.691, 0.52, 0.097]  mean average dice : 0.43599999999999994 average sensitivity: [0.98, 0.978, 0.066]  mean average sensitivity : 0.6746666666666666
[WT, TC, ET]:  average dice: [0.634, 0.314, 0.349]  mean average dice : 0.4323333333333333 average sensitivity: [0.993, 0.998, 0.47]  mean average sensitivity : 0.8203333333333335
[WT, TC, ET]:  average dice: [0.675, 0.473, 0.034]  mean average dice : 0.3940000000000001 average sensitivity: [0.987, 0.991, 0.022]  mean average sensitivity : 0.6666666666666666
[WT, TC, ET]:  average dice: [0.673, 0.499, 0.39]  mean average dice : 0.5206666666666667 average sensitivity: [0.974, 0.98, 0.406]  mean average sensitivity : 0.7866666666666666
[WT, TC, ET]:  average dice: [0.678, 0.423, 0.261]  mean average dice : 0.454 average sensitivity: [0.988, 0.994, 0.307]  mean average sensitivity : 0.763
[WT, TC, ET]:  average dice: [0.769, 0.513, 0.349]  mean average dice : 0.5436666666666666 average sensitivity: [0.983, 0.992, 0.346]  mean average sensitivity : 0.7736666666666667
[WT, TC, ET]:  average dice: [0.717, 0.501, 0.336]  mean average dice : 0.518 average sensitivity: [0.989, 0.99, 0.314]  mean average sensitivity : 0.7643333333333334
[WT, TC, ET]:  average dice: [0.787, 0.546, 0.446]  mean average dice : 0.5930000000000001 average sensitivity: [0.982, 0.99, 0.41]  mean average sensitivity : 0.794
[WT, TC, ET]:  average dice: [0.671, 0.572, 0.389]  mean average dice : 0.5439999999999999 average sensitivity: [0.982, 0.978, 0.364]  mean average sensitivity : 0.7746666666666666
[WT, TC, ET]:  average dice: [0.745, 0.573, 0.276]  mean average dice : 0.5313333333333333 average sensitivity: [0.982, 0.986, 0.223]  mean average sensitivity : 0.7303333333333333
[WT, TC, ET]:  average dice: [0.783, 0.598, 0.336]  mean average dice : 0.5723333333333334 average sensitivity: [0.983, 0.989, 0.277]  mean average sensitivity : 0.7496666666666667
[WT, TC, ET]:  average dice: [0.76, 0.642, 0.379]  mean average dice : 0.5936666666666667 average sensitivity: [0.985, 0.98, 0.33]  mean average sensitivity : 0.765

As you can see, the best result is [WT, TC, ET]: average dice: [0.787, 0.546, 0.446].
Do you think it might be a problem with the parameters you set in "parameters.ini"? Or is there any other augmentation for the training data?
Because of limited computing resources, I didn't do more experiments.
