meteorshub / glia-net
A segmentation network for intracranial aneurysm on CTA images using PyTorch
I tested the released checkpoint on the internal test set, but the performance drops a lot compared with the results in the published paper. How can I reproduce the published performance? Should I retrain?
I am training the model on GPU with batch size = 16, and everything is fine. However, CPU memory keeps growing during evaluation. How can I fix it? I've tried torch.no_grad() and torch.cuda.empty_cache(), but they don't help.
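A common cause of growing CPU memory during evaluation is accumulating per-batch tensors (each keeps its storage alive) instead of plain Python floats: torch.no_grad() stops autograd graph construction, but it does not free tensors you keep references to. A minimal sketch of an evaluation loop that converts each metric to a float immediately; the model, loader, and Dice computation here are illustrative, not the repo's actual code:

```python
import torch

@torch.no_grad()  # no autograd graph is built during evaluation
def evaluate(model, loader, device="cpu"):
    model.eval()
    dice_scores = []
    for images, labels in loader:
        logits = model(images.to(device))
        pred = (torch.sigmoid(logits) > 0.5).float()
        labels = labels.to(device).float()
        inter = (pred * labels).sum()
        dice = 2 * inter / (pred.sum() + labels.sum() + 1e-8)
        # .item() converts to a Python float so the tensor (and any
        # buffers behind it) can be freed; appending `dice` itself would
        # keep every per-batch tensor alive for the whole evaluation.
        dice_scores.append(dice.item())
    return sum(dice_scores) / max(len(dice_scores), 1)
```

If memory still grows with a loop like this, the leak may instead be in the DataLoader workers (e.g. large Python objects in the dataset with num_workers > 0).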
Hi, when evaluating performance at the target level, what rule do you use to decide whether a prediction and a ground-truth target are matched?
I found the code for this judgement in metrics.py, but what exactly does it measure?
def _is_match(center_1, area_1, center_2, area_2):
    # Two targets match when the Euclidean distance between their centers is
    # smaller than the sum of their "equivalent-sphere" radii: for a 3D
    # volume V, a sphere of that volume has radius (3V / 4pi)**(1/3), i.e.
    # about 0.62 * V**(1/3) -- hence the 0.62 factor for the 3D case.
    ndim = len(center_1)
    distance = sum((center_1[i] - center_2[i]) ** 2 for i in range(ndim)) ** 0.5
    if distance < 0.62 * (area_1 ** (1 / ndim) + area_2 ** (1 / ndim)):
        return True
    return False
Hey, I tried many times but I still can't download your datasets following the tips you offered on 25 Jan.
Could you please provide another IP?
Thanks a lot!
How is the overall metrics value computed?
When I use your evaluate_per_case.py and metrics.py to calculate the metrics on the internal test set, the overall value differs from the actual average of the per-case metrics:

ap    auc   precision  recall  sensitivity  specificity  dsc
0.55  0.78  0.83       0.55    0.55         1.00         0.66
0.48  0.77  0.59       0.53    0.53         1.00         0.51

The first line is the overall value reported in the logging file by your evaluate_per_case.py and metrics.py code; the second line is the average of the metrics over all 152 internal test cases.
Why does this inconsistency happen?
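A likely explanation (assuming the repo's overall value is computed over pooled counts, which I have not verified): pooling TP/FP/FN across all cases and then computing a metric once (micro-averaging) generally gives a different number than averaging per-case metrics (macro-averaging) whenever cases contain different numbers of targets. A sketch with hypothetical counts:

```python
# Two hypothetical cases with different target counts.
# Each entry: (true positives, false positives)
cases = [(9, 1),   # case A: precision 0.9
         (1, 3)]   # case B: precision 0.25

# Macro-average: mean of the per-case precisions.
macro = sum(tp / (tp + fp) for tp, fp in cases) / len(cases)

# Micro-average: pool the counts first, then compute precision once.
total_tp = sum(tp for tp, _ in cases)
total_fp = sum(fp for _, fp in cases)
micro = total_tp / (total_tp + total_fp)

print(macro)  # ~ 0.575
print(micro)  # ~ 0.714 (10 / 14)
```

Micro-averaging weights large cases more heavily, macro-averaging weights every case equally, so neither is "wrong"; they just answer different questions.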
I used the default parameters and 512 × 512 images for training, and still got the error.
Thanks for sharing the dataset and thanks for your work. I have downloaded the 1186 internal training cases from the server, but I did not find the 152 internal test cases. Could you share the test data on the server? Thank you so much!
Hey,
I believe I have found a bug in this project.
When I train a GLIA-Net network, the total, local, and global average losses are all equal after each epoch.
An example follows:
2021-09-04 23:39:19 [MainThread] INFO [TaskAneurysmSegTrainer] - (Time epoch: 6081.78)train epoch 57/66 finished. total_loss_avg: 0.1874 local_loss_avg: 0.1874 global_loss_avg: 0.1874 ap: 0.1771 auc: 0.9058 precision: 0.6890 recall: 0.0313 dsc: 0.0598 hd95: 20.5659 per_target_precision: 0.0385 per_target_recall: 0.0057
I believe the problem originates from the initialization of the OrderedDicts avg_losses and eval_avg_losses, created in the following lines:
Line 473 in 8d4dfb2
Line 570 in 8d4dfb2
The exact problem is that the list containing the 3 RunningAverages is created with the [a] * n notation, which stores n references to the same object. The three losses therefore share a single RunningAverage, so updating any one of them updates all three.
A solution would be to use the initialization [RunningAverage() for _ in range(len(losses))] instead of [RunningAverage()] * len(losses).
This solution seems to work for me.
An example follows:
2021-09-08 20:58:40 [MainThread] INFO [TaskAneurysmSegTrainer] - (Time epoch: 6101.42)train epoch 86/86 finished. total_loss_avg: 0.2528 local_loss_avg: 0.2200 global_loss_avg: 0.0328 ap: 0.4419 auc: 0.9313 precision: 0.5942 recall: 0.3963 dsc: 0.4755 hd95: 12.1462 per_target_precision: 0.1321 per_target_recall: 0.1637
And as we can see, the total average loss now equals the sum of the local and global average losses (0.2200 + 0.0328 = 0.2528).
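The aliasing can be reproduced in isolation; RunningAverage below is a minimal stand-in for the project's class, just enough to show the shared-reference behavior:

```python
class RunningAverage:
    """Minimal stand-in for the project's RunningAverage."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        self.count += 1
        self.total += value

# Buggy: [a] * n stores three references to ONE object.
shared = [RunningAverage()] * 3
shared[0].update(1.0)
print([a.count for a in shared])       # [1, 1, 1] -- all "three" changed

# Fixed: a comprehension creates three independent objects.
independent = [RunningAverage() for _ in range(3)]
independent[0].update(1.0)
print([a.count for a in independent])  # [1, 0, 0]
```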
Great work again!
Best regards,
Gabriel
I have tried many times to train on my MRA-TOF data, but I still haven't succeeded in training a model that works. The metrics cannot be calculated correctly because the fp, tp, and related counts come out wrong (for example, precision is always 0.0000 or 0.5000).
How can I change the code to train on my MRA data successfully? Maybe the spacing? Normalization? Or something else?
I think the main issue is the difference between MRA and CTA data. I have checked other possible causes such as num_classes. I tried to identify the differences between my data and your CTA data and to transform the MRA data to resemble CTA data, but it still failed to work.
I need help :( Could you give me some suggestions on how to preprocess the MRA data or how to change the code?
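One likely culprit, offered as a suggestion rather than a confirmed fix: CTA intensities are Hounsfield units with a fixed physical scale, so a fixed intensity window works across scans, while MRA-TOF intensities are arbitrary and scanner-dependent, which breaks CT-style windowing. A common workaround is per-volume percentile normalization; a sketch assuming a NumPy volume (normalize_mra is a hypothetical helper, not part of the repo):

```python
import numpy as np

def normalize_mra(volume, lo_pct=0.5, hi_pct=99.5):
    """Per-volume percentile normalization for MRA-TOF.

    MRA intensities have no fixed physical scale, so instead of a
    CT-style fixed window we clip each volume to its own percentiles
    and rescale to [0, 1].
    """
    lo, hi = np.percentile(volume, [lo_pct, hi_pct])
    volume = np.clip(volume.astype(np.float32), lo, hi)
    return (volume - lo) / max(hi - lo, 1e-8)
```

If spacing also differs between your MRA scans and the CTA training data, resampling to the spacing expected by the config would be worth checking as well.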