
Comments (24)

JohnleeHIT commented on June 11, 2024

Since some of you cannot reproduce the result, I will retrain the model in the coming days. Please wait for my updates, and thanks for your patience.

Lightning980729 commented on June 11, 2024

Could I ask for the checkpoint files to run some tests, if you have saved them? I would appreciate it if you could send me a download link. Thanks.

JohnleeHIT commented on June 11, 2024

Maybe you can decrease the learning rate when the dice doesn't improve and see how it goes.
Sorry about the checkpoint request: the model is the result of a team effort, so it isn't appropriate to release it right now.
I believe you will get the result from the code.
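
(For illustration, the kind of plateau schedule meant here looks roughly like the sketch below; the class name, patience, and decay factor are hypothetical, not part of the brats2019 code.)

```python
# Minimal sketch of plateau-based learning-rate decay (hypothetical helper,
# not taken from the brats2019 repository).
class PlateauDecay:
    def __init__(self, lr=1e-3, patience=5, factor=0.5, min_lr=1e-6):
        self.lr, self.patience, self.factor, self.min_lr = lr, patience, factor, min_lr
        self.best_dice = float("-inf")
        self.bad_evals = 0

    def step(self, dice):
        """Call after each validation round; returns the (possibly decayed) lr."""
        if dice > self.best_dice:
            self.best_dice, self.bad_evals = dice, 0
        else:
            self.bad_evals += 1
            if self.bad_evals >= self.patience:
                # dice has stalled: shrink the lr, but never below min_lr
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_evals = 0
        return self.lr
```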

angledick commented on June 11, 2024

Have you managed to reproduce the author's results yet?

zwlshine commented on June 11, 2024

I also didn't get the same result as the author. What's wrong, and how can it be improved?

Lightning980729 commented on June 11, 2024

No, I can't get the result either, and I still don't know the reason.

zwlshine commented on June 11, 2024

Because of GPU memory restrictions we must train on patch volumes. Could we instead resize the whole volume to a smaller size and train on that? Would that improve the dice?

JohnleeHIT commented on June 11, 2024

You'd better not do that. Resizing the volume means resizing the label at the same time, which will cause a lot of problems.
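
(To illustrate the pitfall: intensity volumes can be interpolated smoothly, but label maps must use nearest-neighbour resampling so no invalid class ids appear, and even then small structures degrade at low resolution. A minimal scipy sketch; the shapes and label values are just examples.)

```python
import numpy as np
from scipy.ndimage import zoom

volume = np.random.rand(155, 240, 240).astype(np.float32)            # intensity volume
label = np.random.randint(0, 4, size=volume.shape).astype(np.uint8)  # per-voxel class ids

factor = 0.5
small_volume = zoom(volume, factor, order=1)  # linear interpolation is fine for intensities
small_label = zoom(label, factor, order=0)    # order=0 (nearest) so no fractional class ids appear

# With order > 0 the label interpolation would blend class ids (e.g. averaging two
# different labels yields a value that belongs to neither class), and even with
# order=0, thin structures such as the enhancing tumor can shrink or vanish.
```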

zwlshine commented on June 11, 2024

> You'd better not do that. Resizing the volume means resizing the label at the same time, which will cause a lot of problems.

Yes, you are right. This task is multi-class segmentation, which is different from binary classification. I have actually used this method for binary classification, and there it really can improve the dice.

JohnleeHIT commented on June 11, 2024

@Lightning980729 @zwlshine I found some mistakes in my code and have uploaded a new version; you should now get the right results. Sorry for the mistakes. Besides, the result I show comes from a model ensemble, so the result of a single model will be slightly inferior to it.
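
(For context, ensembling here can be as simple as averaging the per-voxel class probabilities of several trained models; a minimal numpy sketch, not necessarily the repo's exact procedure.)

```python
import numpy as np

def ensemble_predict(probs_list):
    """Average softmax maps from several models; each entry has shape (D, H, W, C)."""
    avg_probs = np.mean(probs_list, axis=0)  # per-voxel mean class probability
    return np.argmax(avg_probs, axis=-1)     # final per-voxel label
```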

zwlshine commented on June 11, 2024

> @Lightning980729 @zwlshine I found some mistakes in my code and have uploaded a new version; you should now get the right results. Sorry for the mistakes. Besides, the result I show comes from a model ensemble, so the result of a single model will be slightly inferior to it.

Hello, I have read your new code, and I found only one changed function, softmax_weighted_loss, where you re-enabled one line: gt = produce_mask_background(gt, softmaxpred, self.fg_ratio, self.bg_ratio).
Apart from that, I see no other changes. I want to confirm this with you!
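
(For readers following along: the exact body of softmax_weighted_loss isn't shown in this thread, but a generic class-frequency-weighted softmax cross-entropy looks roughly like this numpy sketch; all names here are assumed, not the repo's implementation.)

```python
import numpy as np

def softmax_weighted_ce(softmax_pred, gt_onehot, eps=1e-7):
    """Generic class-frequency-weighted cross-entropy; illustrative only,
    not the repo's softmax_weighted_loss."""
    # Class weights: rarer classes (e.g. enhancing tumor) get larger weights.
    freq = gt_onehot.reshape(-1, gt_onehot.shape[-1]).mean(axis=0)
    weights = 1.0 / (freq + eps)
    weights /= weights.sum()
    # Per-voxel cross-entropy, scaled by the weight of the voxel's true class.
    ce = -(gt_onehot * np.log(softmax_pred + eps)).sum(axis=-1)
    voxel_w = (gt_onehot * weights).sum(axis=-1)
    return (voxel_w * ce).mean()
```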

And I have a question about the learning rate in parameters.ini: at first lr = 0.001, and when the dice reaches a plateau it decreases to 0.0005. Where is this done in your code?
I found that both conv3d and Deconv3d in models.py call slim.l2_regularizer(0.0005); does this change the learning rate from 0.001 to 0.0005?

Thank you very much! I am a new learner, so I'm sorry for asking so many questions, but your code is great, especially your model-combination logic!

JohnleeHIT commented on June 11, 2024

> @Lightning980729 @zwlshine I found some mistakes in my code and have uploaded a new version; you should now get the right results. Sorry for the mistakes. Besides, the result I show comes from a model ensemble, so the result of a single model will be slightly inferior to it.

> Hello, I have read your new code, and I found only one changed function, softmax_weighted_loss, where you re-enabled one line: gt = produce_mask_background(gt, softmaxpred, self.fg_ratio, self.bg_ratio). Apart from that, I see no other changes. I want to confirm this with you!
>
> And I have a question about the learning rate in parameters.ini: at first lr = 0.001, and when the dice reaches a plateau it decreases to 0.0005. Where is this done in your code? I found that both conv3d and Deconv3d in models.py call slim.l2_regularizer(0.0005); does this change the learning rate from 0.001 to 0.0005?
>
> Thank you very much! I am a new learner, so I'm sorry for asking so many questions, but your code is great, especially your model-combination logic!

You'd better git clone the latest version; several places have changed. As for the learning rate, I just change it in the config file when the dice doesn't increase.
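
(Note that slim.l2_regularizer(0.0005) only sets the L2 weight-decay strength; it is unrelated to the learning rate. The manual schedule amounts to editing the config and re-reading it, roughly as below; the section and key names are assumed and may differ from the repo's actual parameters.ini.)

```python
import configparser

# Hypothetical layout; check the repo's parameters.ini for the real section/key names.
cfg = configparser.ConfigParser()
cfg.read("parameters.ini")
lr = cfg.getfloat("train", "learning_rate")  # e.g. 0.001 at first, edited to 0.0005 on plateau
```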

zwlshine commented on June 11, 2024

I'm sure the only effective change is in the function softmax_weighted_loss. The other changes, such as fractal_net in models.py and self.is_global_path in operations.py, are all in commented-out code.

zhangjing1170 commented on June 11, 2024

Hello, I can't get the best result. My best result is average dice [0.603, 0.62, 0.584]. Do you know how to solve this? Thanks!

zwlshine commented on June 11, 2024

> Hello, I can't get the best result. My best result is average dice [0.603, 0.62, 0.584]. Do you know how to solve this? Thanks!

When using only HGG for training, I get almost the same dice for WT and TC, but ET is lower; my ET dice is 0.4.

Lightning980729 commented on June 11, 2024

> Hello, I can't get the best result. My best result is average dice [0.603, 0.62, 0.584]. Do you know how to solve this? Thanks!

> When using only HGG for training, I get almost the same dice for WT and TC, but ET is lower; my ET dice is 0.4.

I just git cloned the latest version, so I am still training and can't answer your question yet. It takes time; I will let you know when I finish.

siyuanSsun commented on June 11, 2024

@JohnleeHIT
After about 30000 epochs of training, here is my result in the train.log file:
(screenshot of train.log omitted)
As you can see, the WT dice is quite close to the state of the art, but the TC and ET parts have a long way to go. I've changed the learning rate whenever the dice did not improve.

zwlshine commented on June 11, 2024

@siyuanSsun
Did you only use HGG for training? What's the learning rate?

siyuanSsun commented on June 11, 2024

> @siyuanSsun
> Did you only use HGG for training? What's the learning rate?

@zwlshine
I used both HGG and LGG for training; I randomly chose part of the data as the training set and the rest as the test set. For the first 20000 epochs I used a learning rate of 0.0005, then changed it to 0.0001 for the rest of training.

zwlshine commented on June 11, 2024

> @siyuanSsun
> Did you only use HGG for training? What's the learning rate?

> @zwlshine
> I used both HGG and LGG for training; I randomly chose part of the data as the training set and the rest as the test set. For the first 20000 epochs I used a learning rate of 0.0005, then changed it to 0.0001 for the rest of training.

Thank you very much!
About changing the learning rate: do you mean that when training reaches epoch 20000 you stop the process, change learning_rate in parameters.ini, and then load the epoch-20000 checkpoint as the pre-trained weights for the rest of training?

siyuanSsun commented on June 11, 2024

> @siyuanSsun
> Did you only use HGG for training? What's the learning rate?

> @zwlshine
> I used both HGG and LGG for training; I randomly chose part of the data as the training set and the rest as the test set. For the first 20000 epochs I used a learning rate of 0.0005, then changed it to 0.0001 for the rest of training.

> Thank you very much!
> About changing the learning rate: do you mean that when training reaches epoch 20000 you stop the process, change learning_rate in parameters.ini, and then load the epoch-20000 checkpoint as the pre-trained weights for the rest of training?

@zwlshine exactly
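
(For reference, the resume step boils down to a tf.train.Saver save/restore, sketched below in TF1 style to match the repo's slim code; the variable, paths, and step count here are hypothetical.)

```python
import os
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# Stand-in for the real network variables.
w = tf.get_variable("w", shape=[8], initializer=tf.zeros_initializer())
saver = tf.train.Saver()
os.makedirs("checkpoints", exist_ok=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    path = saver.save(sess, "checkpoints/model.ckpt", global_step=20000)
    # Later (e.g. a fresh run after editing the learning rate in parameters.ini):
    saver.restore(sess, path)  # continue training from the epoch-20000 weights
```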

JohnleeHIT commented on June 11, 2024

> @JohnleeHIT
> After about 30000 epochs of training, here is my result in the train.log file:
> (screenshot of train.log omitted)
> As you can see, the WT dice is quite close to the state of the art, but the TC and ET parts have a long way to go. I've changed the learning rate whenever the dice did not improve.

Try training only with HGG data.

Med-Process commented on June 11, 2024

@JohnleeHIT Hi, I also got a result with a gap from your reported accuracy. Is there any difference between the released code and the code you used?

TIAN-Ww commented on June 11, 2024

Hello, sorry to bother you. I tried your advice to train with only HGG data. The result truly improved over the last one; the average dice increased by 10 points. But I cannot reproduce your best result, and there is still a big gap. After training for 20000 epochs, the train.log file is as follows:

[WT, TC, ET]:  average dice: [0.592, 0.323, 0.082]  mean average dice : 0.3323333333333333 average sensitivity: [0.969, 0.973, 0.085]  mean average sensitivity : 0.6756666666666667
[WT, TC, ET]:  average dice: [0.422, 0.365, 0.17]  mean average dice : 0.319 average sensitivity: [0.982, 0.965, 0.247]  mean average sensitivity : 0.7313333333333333
[WT, TC, ET]:  average dice: [0.692, 0.433, 0.242]  mean average dice : 0.45566666666666666 average sensitivity: [0.969, 0.969, 0.246]  mean average sensitivity : 0.7280000000000001
[WT, TC, ET]:  average dice: [0.45, 0.344, 0.014]  mean average dice : 0.26933333333333337 average sensitivity: [0.991, 0.979, 0.022]  mean average sensitivity : 0.664
[WT, TC, ET]:  average dice: [0.682, 0.407, 0.341]  mean average dice : 0.4766666666666666 average sensitivity: [0.979, 0.984, 0.413]  mean average sensitivity : 0.7919999999999999
[WT, TC, ET]:  average dice: [0.598, 0.389, 0.283]  mean average dice : 0.42333333333333334 average sensitivity: [0.983, 0.984, 0.322]  mean average sensitivity : 0.763
[WT, TC, ET]:  average dice: [0.679, 0.438, 0.252]  mean average dice : 0.4563333333333333 average sensitivity: [0.983, 0.977, 0.254]  mean average sensitivity : 0.738
[WT, TC, ET]:  average dice: [0.678, 0.439, 0.262]  mean average dice : 0.45966666666666667 average sensitivity: [0.985, 0.982, 0.274]  mean average sensitivity : 0.747
[WT, TC, ET]:  average dice: [0.691, 0.52, 0.097]  mean average dice : 0.43599999999999994 average sensitivity: [0.98, 0.978, 0.066]  mean average sensitivity : 0.6746666666666666
[WT, TC, ET]:  average dice: [0.634, 0.314, 0.349]  mean average dice : 0.4323333333333333 average sensitivity: [0.993, 0.998, 0.47]  mean average sensitivity : 0.8203333333333335
[WT, TC, ET]:  average dice: [0.675, 0.473, 0.034]  mean average dice : 0.3940000000000001 average sensitivity: [0.987, 0.991, 0.022]  mean average sensitivity : 0.6666666666666666
[WT, TC, ET]:  average dice: [0.673, 0.499, 0.39]  mean average dice : 0.5206666666666667 average sensitivity: [0.974, 0.98, 0.406]  mean average sensitivity : 0.7866666666666666
[WT, TC, ET]:  average dice: [0.678, 0.423, 0.261]  mean average dice : 0.454 average sensitivity: [0.988, 0.994, 0.307]  mean average sensitivity : 0.763
[WT, TC, ET]:  average dice: [0.769, 0.513, 0.349]  mean average dice : 0.5436666666666666 average sensitivity: [0.983, 0.992, 0.346]  mean average sensitivity : 0.7736666666666667
[WT, TC, ET]:  average dice: [0.717, 0.501, 0.336]  mean average dice : 0.518 average sensitivity: [0.989, 0.99, 0.314]  mean average sensitivity : 0.7643333333333334
[WT, TC, ET]:  average dice: [0.787, 0.546, 0.446]  mean average dice : 0.5930000000000001 average sensitivity: [0.982, 0.99, 0.41]  mean average sensitivity : 0.794
[WT, TC, ET]:  average dice: [0.671, 0.572, 0.389]  mean average dice : 0.5439999999999999 average sensitivity: [0.982, 0.978, 0.364]  mean average sensitivity : 0.7746666666666666
[WT, TC, ET]:  average dice: [0.745, 0.573, 0.276]  mean average dice : 0.5313333333333333 average sensitivity: [0.982, 0.986, 0.223]  mean average sensitivity : 0.7303333333333333
[WT, TC, ET]:  average dice: [0.783, 0.598, 0.336]  mean average dice : 0.5723333333333334 average sensitivity: [0.983, 0.989, 0.277]  mean average sensitivity : 0.7496666666666667
[WT, TC, ET]:  average dice: [0.76, 0.642, 0.379]  mean average dice : 0.5936666666666667 average sensitivity: [0.985, 0.98, 0.33]  mean average sensitivity : 0.765

As you can see, the best result is [WT, TC, ET] average dice [0.787, 0.546, 0.446]. Do you think it might be a problem with the parameters set in parameters.ini? Or is some other augmentation of the training data needed? Because of limited computing resources, I didn't do more experiments.

Hello, I saw in the comment area that you have run this code before. Did you modify the .py scripts outside the given path when you ran the code? I have encountered some problems; would you mind helping me?
Looking forward to your reply!
Best wishes

