Comments (24)
Since some of you can't reproduce the result, I will retrain the model in the next few days. Please wait for my updates, and thanks for your patience.
from brats2019.
Could I ask for the checkpoint files to do some testing, if you have saved them? I would appreciate it if you could send me a download link. Thanks.
from brats2019.
Maybe you can decrease the learning rate when the dice doesn't improve, and see how it goes.
Sorry about the checkpoint request: it is the result of a team effort, so it's not appropriate to release it right now.
I believe you will get the result from the code.
from brats2019.
Have you reproduced the author's results yet?
from brats2019.
I also couldn't reproduce the author's results. What's wrong? How can I improve it?
from brats2019.
No, I can't reproduce the result either, and I still don't know why.
from brats2019.
Due to GPU memory restrictions we have to train on patch volumes. Could we instead resize the whole volume to a smaller size and train on that? Would that improve the dice score?
from brats2019.
You'd better not do that: resizing the volume means resizing the labels at the same time, which causes a lot of problems.
from brats2019.
Yes, you are right. This task is multi-class segmentation, which is different from the binary case. I have tried this resizing approach for binary segmentation, and there it really can improve the dice score.
from brats2019.
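To make the resizing hazard above concrete, here is a minimal numpy sketch (labels and values are illustrative, not from the repo). BraTS ground truth uses the discrete labels {0, 1, 2, 4}, and smooth interpolation blends them into values that aren't valid classes, while nearest-neighbour keeps the label set valid but moves region boundaries:

```python
import numpy as np

# BraTS ground truth uses discrete labels {0, 1, 2, 4}.
row = np.array([2, 2, 4, 4], dtype=np.float32)  # edema voxels next to enhancing tumour

# Linear (or any smooth) interpolation blends neighbouring labels: sampling
# halfway between an edema voxel (2) and an enhancing voxel (4) yields 3.0,
# which is not a valid BraTS class at all.
blended = 0.5 * row[1] + 0.5 * row[2]

# Nearest-neighbour interpolation keeps the label set valid, but it still
# shifts region boundaries, so thin structures (e.g. ET) can shrink or vanish.
idx = np.round(np.linspace(0, len(row) - 1, 2)).astype(int)
nearest = row[idx]
```

This is why patch-based training is usually preferred over downsampling whole label volumes for multi-class segmentation.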
@Lightning980729 @zwlshine I found some mistakes in my code and have uploaded a new version; you should now get the right results. Sorry for the mistakes. Besides, the result I show is from a model ensemble, so a single model's result will be slightly inferior to it.
from brats2019.
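The comment above only says a model ensemble was used, not the exact scheme, so treat this as an illustration rather than the repo's actual method: a common approach is to average the per-model softmax maps and take the per-voxel argmax.

```python
import numpy as np

def ensemble_predict(prob_maps):
    """prob_maps: list of per-model softmax outputs, each shaped (..., n_classes).
    Average the probabilities across models, then take the argmax per voxel."""
    return np.mean(np.stack(prob_maps), axis=0).argmax(axis=-1)

# Two toy one-voxel, two-class models that disagree in confidence:
p1 = np.array([[0.6, 0.4]])
p2 = np.array([[0.2, 0.8]])
label = ensemble_predict([p1, p2])  # mean is [0.4, 0.6] -> class 1
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident model outvote an uncertain one, which is typically why ensembles beat any single member.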
Hello, I have read your new code, and I found only one changed function, softmax_weighted_loss, where you re-enabled the line: gt = produce_mask_background(gt, softmaxpred, self.fg_ratio, self.bg_ratio).
Apart from that, there are no other changes. I want to confirm this with you!
I also have a question about the learning rate in parameters.ini. At first lr=0.001, and when training reaches a plateau it decreases to 0.0005. Where does this happen in your code?
I found that the functions conv3d and Deconv3d in models.py both call slim.l2_regularizer(0.0005); does this change the learning rate from 0.001 to 0.0005?
Thank you very much! I am a new learner, so I'm sorry for so many questions, but your code is great, especially your model combination logic!
from brats2019.
You'd better git clone the latest version; several places have changed. As for the learning rate, I just change it in the config file when the dice doesn't increase.
from brats2019.
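One clarification on the question above: slim.l2_regularizer(0.0005) sets the L2 weight-decay coefficient of the convolution kernels, which is unrelated to the learning rate; the 0.001 → 0.0005 drop is done by hand via the config file, as the author describes. The manual schedule can be sketched as a small plateau-detection helper (this class and its parameter names are hypothetical, not from the repo):

```python
# A minimal sketch of the manual schedule described above: drop the learning
# rate once the monitored dice stops improving for `patience` evaluations.
class PlateauLR:
    def __init__(self, lr=1e-3, factor=0.5, patience=3):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.bad = -1.0, 0

    def step(self, dice):
        if dice > self.best:          # new best: reset the patience counter
            self.best, self.bad = dice, 0
        else:
            self.bad += 1
            if self.bad >= self.patience:
                self.lr *= self.factor  # e.g. 0.001 -> 0.0005
                self.bad = 0
        return self.lr
```

Frameworks offer the same idea built in (e.g. Keras's ReduceLROnPlateau callback); stopping the run and editing parameters.ini is just the manual equivalent.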
I'm sure the only effective change is in the function softmax_weighted_loss. The other changes, such as fractal_net in models.py and self.is_global_path in operations.py, are all commented out.
from brats2019.
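The thread doesn't show the body of softmax_weighted_loss or produce_mask_background, so the following is only a sketch of the general technique the name suggests: softmax cross-entropy where each class is weighted by its inverse frequency, so that rare tumour voxels are not drowned out by background.

```python
import numpy as np

def softmax_weighted_loss(logits, gt_onehot, eps=1e-8):
    """Class-frequency-weighted softmax cross-entropy.
    logits, gt_onehot: arrays shaped (n_voxels, n_classes).
    This is a common choice for BraTS-style class imbalance; the repo's
    exact weighting (and its background masking) may differ."""
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    prob = e / e.sum(axis=-1, keepdims=True)
    # Weight each class by its inverse frequency in the ground truth.
    freq = gt_onehot.reshape(-1, gt_onehot.shape[-1]).mean(axis=0)
    w = 1.0 / (freq + eps)
    w /= w.sum()
    return -np.mean(np.sum(w * gt_onehot * np.log(prob + eps), axis=-1))
```

With this weighting, mislabelling a rare enhancing-tumour voxel costs far more than mislabelling a background voxel, which pushes the optimiser away from the trivial all-background solution.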
Hello, I can't reproduce the best result. My best result is average dice [0.603, 0.62, 0.584]. Do you know how to solve it? Thanks!
from brats2019.
When I only use HGG for training, I get almost the same dice for WT and TC, but ET is lower, around 0.4.
from brats2019.
I just git cloned the latest version, so I am still training and can't answer your question yet. It takes time; I will let you know when I finish.
from brats2019.
@JohnleeHIT
After about 30000 epochs of training, here is my result from the train.log file:
As you can see, the WT dice is quite close to the state of the art, but TC and ET still have a long way to go. I've been lowering the learning rate whenever the dice does not improve.
from brats2019.
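For readers new to the benchmark: the [WT, TC, ET] numbers in these logs are dice scores over the standard nested BraTS regions built from the raw labels {0, 1, 2, 4}: whole tumour (1, 2, 4), tumour core (1, 4), and enhancing tumour (4). A minimal sketch of how they are computed (the repo's evaluation code may differ in details such as empty-region handling):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def brats_dices(pred, gt):
    """Per-region dice for predicted and ground-truth volumes
    carrying the raw BraTS labels {0, 1, 2, 4}."""
    regions = {"WT": [1, 2, 4], "TC": [1, 4], "ET": [4]}
    return {name: dice(np.isin(pred, labs), np.isin(gt, labs))
            for name, labs in regions.items()}
```

Because the regions are nested, a model can score well on WT while still missing the small ET region entirely, which matches the pattern seen in the logs below.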
@siyuanSsun
Did you only use HGG for training? What's the learning rate?
from brats2019.
@zwlshine
I used both HGG and LGG for training. However, I randomly chose part of the data as the training set and the rest as the test set. For the first 20000 epochs I used 0.0005 as the learning rate and then changed it to 0.0001 for the rest of the training.
from brats2019.
Thank you very much!
About changing the learning rate: do you mean that when you reach epoch 20000 you stop the process, change learning_rate in the parameters.ini file, and then load the epoch-20000 checkpoint as pre_weight for the rest of the training?
from brats2019.
@zwlshine exactly
from brats2019.
Try training only with HGG data.
from brats2019.
@JohnleeHIT Hi, I also get similar results, which still have a gap from your reported accuracy. Is there any difference between the released code and your own code?
from brats2019.
Hello, sorry to bother you. I tried your advice to train with only HGG data. The result truly improved over the last one: the average dice increased by 10 points. But I cannot reproduce your best result and there is still a big gap. After training 20000 epochs, the train.log file is as follows:
[WT, TC, ET]: average dice: [0.592, 0.323, 0.082] mean average dice: 0.3323333333333333 average sensitivity: [0.969, 0.973, 0.085] mean average sensitivity: 0.6756666666666667
[WT, TC, ET]: average dice: [0.422, 0.365, 0.17] mean average dice: 0.319 average sensitivity: [0.982, 0.965, 0.247] mean average sensitivity: 0.7313333333333333
[WT, TC, ET]: average dice: [0.692, 0.433, 0.242] mean average dice: 0.45566666666666666 average sensitivity: [0.969, 0.969, 0.246] mean average sensitivity: 0.7280000000000001
[WT, TC, ET]: average dice: [0.45, 0.344, 0.014] mean average dice: 0.26933333333333337 average sensitivity: [0.991, 0.979, 0.022] mean average sensitivity: 0.664
[WT, TC, ET]: average dice: [0.682, 0.407, 0.341] mean average dice: 0.4766666666666666 average sensitivity: [0.979, 0.984, 0.413] mean average sensitivity: 0.7919999999999999
[WT, TC, ET]: average dice: [0.598, 0.389, 0.283] mean average dice: 0.42333333333333334 average sensitivity: [0.983, 0.984, 0.322] mean average sensitivity: 0.763
[WT, TC, ET]: average dice: [0.679, 0.438, 0.252] mean average dice: 0.4563333333333333 average sensitivity: [0.983, 0.977, 0.254] mean average sensitivity: 0.738
[WT, TC, ET]: average dice: [0.678, 0.439, 0.262] mean average dice: 0.45966666666666667 average sensitivity: [0.985, 0.982, 0.274] mean average sensitivity: 0.747
[WT, TC, ET]: average dice: [0.691, 0.52, 0.097] mean average dice: 0.43599999999999994 average sensitivity: [0.98, 0.978, 0.066] mean average sensitivity: 0.6746666666666666
[WT, TC, ET]: average dice: [0.634, 0.314, 0.349] mean average dice: 0.4323333333333333 average sensitivity: [0.993, 0.998, 0.47] mean average sensitivity: 0.8203333333333335
[WT, TC, ET]: average dice: [0.675, 0.473, 0.034] mean average dice: 0.3940000000000001 average sensitivity: [0.987, 0.991, 0.022] mean average sensitivity: 0.6666666666666666
[WT, TC, ET]: average dice: [0.673, 0.499, 0.39] mean average dice: 0.5206666666666667 average sensitivity: [0.974, 0.98, 0.406] mean average sensitivity: 0.7866666666666666
[WT, TC, ET]: average dice: [0.678, 0.423, 0.261] mean average dice: 0.454 average sensitivity: [0.988, 0.994, 0.307] mean average sensitivity: 0.763
[WT, TC, ET]: average dice: [0.769, 0.513, 0.349] mean average dice: 0.5436666666666666 average sensitivity: [0.983, 0.992, 0.346] mean average sensitivity: 0.7736666666666667
[WT, TC, ET]: average dice: [0.717, 0.501, 0.336] mean average dice: 0.518 average sensitivity: [0.989, 0.99, 0.314] mean average sensitivity: 0.7643333333333334
[WT, TC, ET]: average dice: [0.787, 0.546, 0.446] mean average dice: 0.5930000000000001 average sensitivity: [0.982, 0.99, 0.41] mean average sensitivity: 0.794
[WT, TC, ET]: average dice: [0.671, 0.572, 0.389] mean average dice: 0.5439999999999999 average sensitivity: [0.982, 0.978, 0.364] mean average sensitivity: 0.7746666666666666
[WT, TC, ET]: average dice: [0.745, 0.573, 0.276] mean average dice: 0.5313333333333333 average sensitivity: [0.982, 0.986, 0.223] mean average sensitivity: 0.7303333333333333
[WT, TC, ET]: average dice: [0.783, 0.598, 0.336] mean average dice: 0.5723333333333334 average sensitivity: [0.983, 0.989, 0.277] mean average sensitivity: 0.7496666666666667
[WT, TC, ET]: average dice: [0.76, 0.642, 0.379] mean average dice: 0.5936666666666667 average sensitivity: [0.985, 0.98, 0.33] mean average sensitivity: 0.765
As you can see, the best result is [WT, TC, ET] average dice: [0.787, 0.546, 0.446]. Do you think it might be a problem with the parameters you set in parameters.ini? Or is there some other augmentation for the training data? Because of limited computing resources, I didn't do more experiments.
Hello, I saw in the comments that you have run this code before. Did you modify any of the .py scripts outside that path when you ran it? I have encountered some problems; would you mind helping me?
Looking forward to your reply!
Best wishes
from brats2019.
Related Issues (19)
- after training 9000 epochs the dice metric is low (especially for ET) HOT 5
- list assignment index out of range HOT 5
- Error occurs when data's brain region size lower than patch size HOT 1
- why is there a problem "AttributeError: 'BatchGenerator' object has no attribute 'next'? HOT 3
- directory name is invalid
- Where can I get the test set? HOT 2
- Are you willing to provide the outcome of your training?
- I want to know how to generate the gif in your readme, thank you very much... HOT 4
- Will you provide pre-train model?
- the purpose of the 359 lines of code "input_concat" in operations.py HOT 1
- model HOT 3
- Index error: list index out of range HOT 1
- RuntimeError: Graph is finalized and cannot be modified.
- In model unet_resnet, why do expand_dims for input_pred_softmax? HOT 1
- You load pre-weights only for the first mode named unet,did you? HOT 2
- Why not do N4BiasFieldCorrection? HOT 2
- Definition of all_stages_loss, how to choose suitable coefficient for separate stage loss? HOT 2
- Cross validation may be better HOT 1