Comments (11)
Hi, can you please elaborate on the issue? I will update test.py soon.
from medical-transformer.
Thank you for your reply. I reproduced your code on the GlaS dataset and tested it with the trained model, using the same training settings as the paper. The experimental environment is Windows, two 1080Ti GPUs, and PyTorch 1.2.
There is no complete test code in the test folder, so I used the two metrics classwise_iou and classwise_f1 from metrics.py for testing, but the results show that the accuracy of the trained model is roughly 20% lower than reported in the paper. The specific code is as follows:
total_iou = []
total_f1_score = []
...
iou = classwise_iou(y_out, y_batch)
precision, recall, f1_score = classwise_f1(y_out, y_batch)
total_iou.append(iou)
total_f1_score.append(f1_score.data.cpu().numpy())
print('mean iou is', np.mean(total_iou))
print('mean f1_score is', np.mean(total_f1_score))
And when I tested with my own evaluation metric, the IoU was also significantly lower than the result given in the paper. The code used is as follows:
def iou_score(output, target):
    smooth = 1e-5
    if torch.is_tensor(output):
        output = torch.sigmoid(output).data.cpu().numpy()
    if torch.is_tensor(target):
        target = target.data.cpu().numpy()
    output_ = output > 0.5
    target_ = target > 0.5
    intersection = (output_ & target_).sum()
    union = (output_ | target_).sum()
    return (intersection + smooth) / (union + smooth)
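As a quick sanity check on the thresholded-IoU formula above, here is the same computation on toy NumPy arrays (the torch branches only convert tensors to NumPy, so they are skipped here; the arrays and values are made up for illustration):

```python
import numpy as np

smooth = 1e-5
output = np.array([[0.9, 0.1],
                   [0.8, 0.2]])            # scores already in [0, 1]
target = np.array([[1, 0],
                   [1, 1]])                # ground-truth mask

output_ = output > 0.5                     # threshold prediction
target_ = target > 0.5                     # binarize mask
intersection = (output_ & target_).sum()   # pixels both agree on: 2
union = (output_ | target_).sum()          # pixels in either: 3
iou = (intersection + smooth) / (union + smooth)
print(iou)                                 # close to 2/3
```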
Everything else is the same as the source code. I cannot find the cause of the accuracy gap myself; please give me some advice and update the test code when you can. Thank you very much!
Thank you for sharing your code!
If you read the performance-metrics code provided by the author (see performancemetrics_ax in MATLAB), you will find there is probably a mistake in calculating the F1 score and mIoU:
if (tp~=0)
    F = (2*tp)/(2*tp+fp+fn);
    MIU=[MIU,(tp*1.0/uni)];
    PA=[PA,(tp*1.0/ttp)];
    Fsc=[Fsc;[i,F]];
% elseif (lab==0)
%     MIU=[MIU,1];
%     PA=[PA,1];
%     Fsc=[Fsc;[i,1]];
else
    MIU=[MIU,1];
    PA=[PA,1];
    Fsc=[Fsc;[i,1]];
end
I think we should keep the elseif (lab==0) part, because tp==0 may be due to a totally wrong prediction, e.g. the prediction contains no foreground/target pixels while the mask/label image does. In that case, this code sets IoU, precision, and F1 to 1, which results in an artificially high score.
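To make the inflation concrete, here is a hypothetical toy case (not the repo's code): an all-background prediction against a mask that does contain foreground has tp == 0 and a true IoU of 0, yet the else branch above would record 1:

```python
import numpy as np

pred = np.zeros((4, 4), dtype=bool)       # no foreground predicted at all
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                     # but the label has foreground

tp = np.logical_and(pred, mask).sum()     # 0 true positives
union = np.logical_or(pred, mask).sum()   # 4 foreground pixels in total

iou = tp / union if union > 0 else 1.0    # true IoU is 0.0, not 1.0
print(tp, iou)
```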
how about
elseif (lab~=0) && (tp==0)
    MIU=[MIU,0];
    PA=[PA,0];
    Fsc=[Fsc;[i,0]];
Hi,
Thanks for the interest in our work.
@hgmlu: To compute the performance metrics, please use the MATLAB code. test.py is only used to generate the predictions; the metrics were evaluated with the MATLAB code provided in the repo.
@pqu2 I agree that the condition (lab~=0) && (tp==0) is a better option. The tp~=0 condition was set to handle the case where the label is blank (mainly seen in the US dataset). In the other datasets this condition is satisfied every time, as the predictions are never so bad as to not yield even a single tp. The same code was used to evaluate all the baseline network predictions as well.
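A minimal Python sketch (function name and arguments are assumed here, not the repo's API) of the agreed per-image branching: score normally when tp > 0, return 1 only when the label itself is blank, and 0 when foreground exists but was missed entirely:

```python
def per_image_scores(tp, fp, fn, label_has_fg):
    """Return (F1, IoU) for one image from its pixel counts."""
    if tp != 0:
        f1 = (2 * tp) / (2 * tp + fp + fn)
        iou = tp / (tp + fp + fn)          # union = tp + fp + fn
        return f1, iou
    elif not label_has_fg:                 # blank label: count as perfect
        return 1.0, 1.0
    else:                                  # foreground present but missed
        return 0.0, 0.0

print(per_image_scores(2, 1, 1, True))     # partial overlap
print(per_image_scores(0, 0, 4, True))     # total miss: (0.0, 0.0)
```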
Thanks for your reply and sharing. I will retest the trained model and add this part of the code.
Hi, I have retested my trained model on the GlaS dataset with performancemetrics_glas.m and compared the results (F1-score and IoU); they are still lower than the numbers in the paper (0.7233 F1-score, 0.5771 IoU). Can you upload your trained models for the three datasets? Also, when will you make the US dataset public?
Hi, the GlaS dataset was resized to 128 x 128 resolution before running the experiments. Yes, the pretrained models and the Brain US dataset will be made available soon.
Thank you for your reply. It would be great if you could share your pretrained models. Thanks again!
Thank you for your excellent contribution. I tried your Colab tutorial, but the accuracy differs from the results in your paper. Specifically, on Colab (F1, IoU) = (0.646, 0.84), while the paper reports (F1, IoU) = (0.7955, 0.6617). Can you please explain why?
The Colab code is actually an unofficial implementation for quick train/test. I will write an official Colab notebook soon.