mic-dkfz / medicaldetectiontoolkit

1.3K 54.0 299.0 4.46 MB

The Medical Detection Toolkit contains 2D + 3D implementations of prevalent object detectors such as Mask R-CNN, Retina Net, Retina U-Net, as well as a training and inference framework focused on dealing with medical images.

License: Apache License 2.0

Python 84.65% Cuda 8.27% C 7.07%
detection retina-unet object-detection mask-rcnn 3d-models deep-learning retina-net deep-neural-networks segmentation semantic-segmentation

medicaldetectiontoolkit's People

Contributors

pfjaeger, pfjaegerfb

medicaldetectiontoolkit's Issues

Dataloader in lidc_exp

What is the format of the input in lidc_exp? I followed preprocessing.py and set my dataset folder up as:

path/to/dataset/pp_norm/xxxxxx_img.npy
path/to/dataset/pp_norm/xxxxxx_rois.npy
...
...
path/to/dataset/pp_norm/info_df.pickle

However, when I run the training, the program gets stuck at exec.py: batch = next(batch_gen['train']).
There is probably a bug in how I prepared my input.
Hope to get some help, thanks!
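
A minimal sanity-check sketch (assuming the pp_norm layout shown above; the exact columns of info_df.pickle depend on your preprocessing) to verify the prepared input before starting training:

import glob
import os
import pickle

import numpy as np

pp_dir = 'path/to/dataset/pp_norm'  # hypothetical path, matching the layout above

with open(os.path.join(pp_dir, 'info_df.pickle'), 'rb') as f:
    info_df = pickle.load(f)
print(info_df.head())  # check that pids and class targets look sensible

img_files = sorted(glob.glob(os.path.join(pp_dir, '*_img.npy')))
roi_files = sorted(glob.glob(os.path.join(pp_dir, '*_rois.npy')))
print(len(img_files), 'images /', len(roi_files), 'roi masks')

# every image should have a roi mask with the same spatial shape
for img_f, roi_f in zip(img_files, roi_files):
    img, rois = np.load(img_f), np.load(roi_f)
    assert img.shape == rois.shape, (img_f, img.shape, rois.shape)

If batch = next(batch_gen['train']) hangs, checking the shapes and the dataframe contents first usually narrows down whether the data or the loader configuration is at fault.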

Pytorch 1.0

Hi! Are you planning to migrate to Pytorch 1.0?

Help! Training Screen Freeze

Hello Sir Paul, after modifying the lidc data loader, configs, and preprocessing code for my custom private Lung CT scan dataset with 3 classification classes (solid, subsolid, groundglass),
I have a problem visualizing the training progress: the train_batches counter jumps straight from nothing to 75/200 and then gets stuck during validation at epoch 1 (I am not sure whether it is still validating or actually stuck, since the GPU memory consumption reported by nvidia-smi stays the same, Tesla P100 9GB/16GB).

The parameters I used are:

self.num_epochs = 100
self.num_train_batches = 200 if self.dim == 2 else 200
self.batch_size = 32 if self.dim == 2 else 16

Composition of the loaded dataset:

data set loaded with: 41 train / 15 val / 15 test patients

Is it normal that the train_batch counter only updates every 75 batches, and that validation of the dataset (I use val_patient) takes such a long time?

Thank you Sir Paul
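
A small debugging sketch (assuming the generator dict built in exec.py and that batches are dicts with a 'data' entry; both are assumptions to verify against your data loader) to time the validation generator on its own and see whether data loading is the bottleneck:

import time

# hypothetical: batch_gen is the dict of generators built in exec.py
for i in range(5):
    t0 = time.time()
    batch = next(batch_gen['val_patient'])  # same access pattern as exec.py uses for 'train'
    print('val batch %d, data shape %s, took %.1fs'
          % (i, str(batch['data'].shape), time.time() - t0))

If these calls already take minutes, the freeze is in the data pipeline rather than in the network's validation forward passes.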

HELP

I ran preprocessing.py and got this error:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 44, in mapstar
return list(map(*args))
File "C:\medicaldetectiontoolkit-master\experiments\lidc_exp\preprocessing.py", line 56, in pp_patient
img_arr = resample_array(img_arr, img.GetSpacing(), cf.target_spacing)
File "C:\medicaldetectiontoolkit-master\experiments\lidc_exp\preprocessing.py", line 44, in resample_array
resampled_img = tf.resize(img, target_shape, order=1, clip=True, mode='edge')
File "C:\ProgramData\Anaconda3\lib\site-packages\skimage\transform_warps.py", line 177, in resize
indexing='ij'))
File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 4060, in meshgrid
output = [x.copy() for x in output]
File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 4060, in
output = [x.copy() for x in output]
MemoryError
"""

Training the model for a custom dataset

Hi @pfjaeger,
Thanks for your simplified implementation of Retina U-Net in pytorch.

I have a custom 2D image dataset with 5 classes, and I would like to train a Retina U-Net model on it. Where do I get started, and what changes need to be made?

Can you please upload the toy dataset? I'm going through the code but unable to follow.
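
A minimal, hypothetical sketch of the kind of config changes involved (assuming the attribute names used by the lidc_exp example config, e.g. self.dim, self.model, self.head_classes; verify against your copy of configs.py, and a matching experiments/<your_exp>/data_loader.py is needed as well):

# stands in for a subclass of the repo's default config; attribute names are assumptions
class MyConfigs:
    def __init__(self):
        self.dim = 2               # 2D input images
        self.model = 'retina_unet'
        self.head_classes = 6      # assumption: 5 foreground classes + 1 background

cf = MyConfigs()
print(cf.dim, cf.model, cf.head_classes)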

Is it possible to do initial preprocessing using CUDA?

Hello Sir Paul, it has been almost 48 hours since I started converting the LIDC-IDRI dataset (1010 patients) into nrrd (NIfTI) files before preprocessing them again into numpy, and it is still only at about 60% progress.
For reference when I preprocess my private dataset, do you have any solution or code to speed up the conversion using CUDA? (Note: I am planning to reconstruct my database using Sir Michael's code.)
Is it possible to speed up the conversion using CUDA cores, since Sir Michael's pipeline uses the third-party MITK software to convert to nrrd (NIfTI) files?

Thank you Sir

PC Used:
Intel i7-7700, 16GB RAM, NVIDIA GTX1050Ti 4GB
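
CUDA is unlikely to help much here, since the DICOM-to-nrrd conversion is mostly disk- and CPU-bound. A more practical speed-up is to parallelise over patients, in the same spirit as the multiprocessing pool that lidc_exp/preprocessing.py already uses (visible in the traceback a few issues above); a minimal sketch with a hypothetical convert_patient function:

from multiprocessing import Pool

def convert_patient(patient_dir):
    # hypothetical: convert one patient's DICOM series to nrrd/npy here
    return patient_dir

if __name__ == '__main__':
    patient_dirs = ['path/to/patient_001', 'path/to/patient_002']  # hypothetical input folders
    with Pool(processes=8) as pool:                                # tune to your CPU core count
        for done in pool.imap_unordered(convert_patient, patient_dirs):
            print('finished', done)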

training LIDC dataset with maskrcnn

I trained the LIDC dataset with the maskrcnn network and observed the training loss as follows.
(training loss curve screenshot)
In the early stage of training, mrcnn_bbox: 0.00, mrcnn_mask: 0.00, dcount [0, 0], and the total loss decreased rapidly.
In the middle stage of training, mrcnn_class: 0.75, mrcnn_bbox: 1.05, mrcnn_mask: 0.67, dcount [0, 3], causing the total loss to increase suddenly and then decrease slowly.
PS: the default configuration from GitHub is used.
Thanks.

Batch generation for LIDC dataset

Hello Paul,

Thank you for your repo, you did amazing work! I executed your code with LIDC dataset and it works.

I am working on my own modification of 3D RetinaNet in Keras, but it doesn't work. I'm going to work through your pipeline, and maybe it will help me find the bugs in my code.

I visualized a batch from the LIDC BatchGenerator and I can't figure out one thing. In the segmentation mask, nonzero values are present in only 2 samples (samples 4 and 5). But, at the same time, the class_target is as follows:

Class target: [list([]) list([0]) list([1, 0]) list([1, 0])
list([0, 1, 1, 1, 0, 1, 1, 0]) list([0]) list([0]) list([1])]

I thought the class_target contains the class labels of the nodules. I checked other batches and the situation is the same. I would very much appreciate it if you could explain why the class_target looks like this.

ROI suppressor in LIDC preprocessing.py

Hello Sir Paul, thank you for your answer last time. Regarding the ROI suppressor in the LIDC preprocessing, I have a question:
If my dataset contains only one final annotation per patient (one final annotation by the head radiologist, instead of multiple radiologist annotations), will the ROI be suppressed or not?

I hope this information helps you identify the problem:

  1. My private dataset consists of body (thorax) and lung CT scans with slice thicknesses of 0.5 and 1.0.
  2. When preprocessed using a modified LIDC-IDRI-Processing, almost 80% of the data was filtered out because of incompatible DICOMs when loaded with MitkCLDicomtoNRRD. --> I already asked at MIC-DKFZ/LIDC-IDRI-processing#5 but have had no response yet; maybe you know where the problem is?
  3. After passing it through, I ran lidc_exp/preprocessing.py and almost all (90%) of the RoIs are suppressed, even though each RoI is already confirmed to be a single RoI (not multiple RoIs).

Could you give me a hint on how to keep these RoIs from being suppressed in preprocessing.py, lines 75-102?
Thank you Sir @pfjaeger.

HELP! Error during training LIDC dataset

Hello Sir Paul, I have already converted the LIDC database. However, after I run python exec.py --mode train --exp_source experiments/lidc_exp/ --exp_dir LIDC-Retina-model, the training gets stuck (it shows it is validating) on fold 1. Note: I changed num_epoch to 50 and num_trainbatches to 10 since I only use a 10-sample dataset.

CLI message:

starting training epoch 50
tr. batch 1/10 (ep. 50) fw 2.251s / bw 0.743s / total 2.993s || loss: 1.03, class: 0.89, bbox: 0.14
tr. batch 2/10 (ep. 50) fw 2.532s / bw 0.744s / total 3.276s || loss: 0.89, class: 0.66, bbox: 0.23
tr. batch 3/10 (ep. 50) fw 2.392s / bw 0.742s / total 3.134s || loss: 0.74, class: 0.73, bbox: 0.01
tr. batch 4/10 (ep. 50) fw 2.535s / bw 0.517s / total 3.053s || loss: 0.47, class: 0.47, bbox: 0.00
tr. batch 5/10 (ep. 50) fw 3.106s / bw 0.744s / total 3.850s || loss: 0.78, class: 0.71, bbox: 0.08
tr. batch 6/10 (ep. 50) fw 2.920s / bw 0.742s / total 3.662s || loss: 0.52, class: 0.49, bbox: 0.03
tr. batch 7/10 (ep. 50) fw 2.220s / bw 0.747s / total 2.967s || loss: 0.67, class: 0.56, bbox: 0.11
tr. batch 8/10 (ep. 50) fw 2.164s / bw 0.758s / total 2.921s || loss: 0.57, class: 0.51, bbox: 0.06
tr. batch 9/10 (ep. 50) fw 2.333s / bw 0.750s / total 3.082s || loss: 0.80, class: 0.70, bbox: 0.10
tr. batch 10/10 (ep. 50) fw 2.390s / bw 0.760s / total 3.150s || loss: 0.70, class: 0.66, bbox: 0.03
evaluating in mode train
evaluating with match_iou: 0.1
starting validation in mode val_sampling.
evaluating in mode val_sampling
evaluating with match_iou: 0.1
non none scores: [0.00000000e+00 0.00000000e+00 0.00000000e+00 1.33691776e-04
1.12577370e-05 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
3.19541394e-06 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.34394073e-05 3.46760788e-04 0.00000000e+00 6.57964466e-05
6.30265885e-06 1.83419772e-04 0.00000000e+00 0.00000000e+00
3.13401814e-05 0.00000000e+00 0.00000000e+00 8.20894272e-05
4.21034540e-06 1.00719716e-03 7.65382661e-07 1.39219383e-05
7.98896203e-04 0.00000000e+00 2.30329873e-04 2.08085640e-04
1.10898187e-06 0.00000000e+00 0.00000000e+00 1.11219310e-05
1.91517091e-04 1.70706726e-04 1.07269665e-06 0.00000000e+00
0.00000000e+00 4.47997328e-05 0.00000000e+00 1.04838946e-06
1.86664529e-03 5.89871320e-06 1.97787268e-04]
trained epoch 50: took 212.29711294174194 sec. (41.897600412368774 train / 170.39951252937317 val)
plotting predictions from validation sampling.
starting testing model of fold 0 in exp LIDC-Retina-TrainTest
feature map shapes: [[32 32 64]
[16 16 32]
[ 8 8 16]
[ 4 4 8]]
anchor scales: {'z': [[2, 2.5198420997897464, 3.1748021039363987], [4, 5.039684199579493, 6.3496042078727974], [8, 10.079368399158986, 12.699208415745595], [16, 20.15873679831797, 25.39841683149119]], 'xy': [[8, 10.079368399158986, 12.699208415745595], [16, 20.15873679831797, 25.39841683149119], [32, 40.31747359663594, 50.79683366298238], [64, 80.63494719327188, 101.59366732596476]]}
level 0: built anchors (589824, 6) / expected anchors 589824 ||| total build (589824, 6) / total expected 673920
level 1: built anchors (73728, 6) / expected anchors 73728 ||| total build (663552, 6) / total expected 673920
level 2: built anchors (9216, 6) / expected anchors 9216 ||| total build (672768, 6) / total expected 673920
level 3: built anchors (1152, 6) / expected anchors 1152 ||| total build (673920, 6) / total expected 673920
using default pytorch weight init
subset: selected 2 instances from df
data set loaded with: 2 test patients
tmp ensembling over rank_ix:0 epoch:LIDC-Retina-TrainTest/fold_0/48_best_params.pth
evaluating patient 0009a for fold 0
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
evaluating patient 0003a for fold 0
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
tmp ensembling over rank_ix:1 epoch:LIDC-Retina-TrainTest/fold_0/29_best_params.pth
evaluating patient 0009a for fold 0
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
evaluating patient 0003a for fold 0
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
tmp ensembling over rank_ix:2 epoch:LIDC-Retina-TrainTest/fold_0/32_best_params.pth
evaluating patient 0009a for fold 0
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
evaluating patient 0003a for fold 0
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
tmp ensembling over rank_ix:3 epoch:LIDC-Retina-TrainTest/fold_0/17_best_params.pth
evaluating patient 0009a for fold 0
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
evaluating patient 0003a for fold 0
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
tmp ensembling over rank_ix:4 epoch:LIDC-Retina-TrainTest/fold_0/34_best_params.pth
evaluating patient 0009a for fold 0
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
forwarding (patched) patient with shape: (180, 1, 128, 128, 64)
evaluating patient 0003a for fold 0
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
forwarding (patched) patient with shape: (216, 1, 128, 128, 64)
finished predicting test set. starting post-processing of predictions.
applying wcs to test set predictions with iou = 1e-05 and n_ens = 20.
applying 2Dto3D merging to test set predictions with iou = 0.1.
evaluating in mode test
evaluating with match_iou: 0.1
/home/ivan/.virtualenvs/virtual-py3/lib/python3.5/site-packages/numpy/core/fromnumeric.py:2920: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/home/ivan/.virtualenvs/virtual-py3/lib/python3.5/site-packages/numpy/core/_methods.py:85: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
/home/ivan/.virtualenvs/virtual-py3/lib/python3.5/site-packages/matplotlib/axes/_base.py:3364: UserWarning: Attempting to set identical bottom==top results
in singular transformations; automatically expanding.
bottom=1.0, top=1.0
self.set_ylim(upper, lower, auto=None)
Logging to LIDC-Retina-TrainTest/fold_1/exec.log
performing training in 3D over fold 1 on experiment LIDC-Retina-TrainTest with model retina_net
performing training in 3D over fold 1 on experiment LIDC-Retina-TrainTest with model retina_net
feature map shapes: [[32 32 64]
[16 16 32]
[ 8 8 16]
[ 4 4 8]]
feature map shapes: [[32 32 64]
[16 16 32]
[ 8 8 16]
[ 4 4 8]]
anchor scales: {'z': [[2, 2.5198420997897464, 3.1748021039363987], [4, 5.039684199579493, 6.3496042078727974], [8, 10.079368399158986, 12.699208415745595], [16, 20.15873679831797, 25.39841683149119]], 'xy': [[8, 10.079368399158986, 12.699208415745595], [16, 20.15873679831797, 25.39841683149119], [32, 40.31747359663594, 50.79683366298238], [64, 80.63494719327188, 101.59366732596476]]}
anchor scales: {'z': [[2, 2.5198420997897464, 3.1748021039363987], [4, 5.039684199579493, 6.3496042078727974], [8, 10.079368399158986, 12.699208415745595], [16, 20.15873679831797, 25.39841683149119]], 'xy': [[8, 10.079368399158986, 12.699208415745595], [16, 20.15873679831797, 25.39841683149119], [32, 40.31747359663594, 50.79683366298238], [64, 80.63494719327188, 101.59366732596476]]}
level 0: built anchors (589824, 6) / expected anchors 589824 ||| total build (589824, 6) / total expected 673920
level 0: built anchors (589824, 6) / expected anchors 589824 ||| total build (589824, 6) / total expected 673920
level 1: built anchors (73728, 6) / expected anchors 73728 ||| total build (663552, 6) / total expected 673920
level 1: built anchors (73728, 6) / expected anchors 73728 ||| total build (663552, 6) / total expected 673920
level 2: built anchors (9216, 6) / expected anchors 9216 ||| total build (672768, 6) / total expected 673920
level 2: built anchors (9216, 6) / expected anchors 9216 ||| total build (672768, 6) / total expected 673920
level 3: built anchors (1152, 6) / expected anchors 1152 ||| total build (673920, 6) / total expected 673920
level 3: built anchors (1152, 6) / expected anchors 1152 ||| total build (673920, 6) / total expected 673920
using default pytorch weight init
using default pytorch weight init
loading dataset and initializing batch generators...
loading dataset and initializing batch generators...
data set loaded with: 6 train / 2 val / 2 test patients
data set loaded with: 6 train / 2 val / 2 test patients
starting training epoch 1
starting training epoch 1
tr. batch 1/10 (ep. 1) fw 1.901s / bw 0.557s / total 2.458s || loss: 0.55, class: 0.55, bbox: 0.00
tr. batch 1/10 (ep. 1) fw 1.901s / bw 0.557s / total 2.458s || loss: 0.55, class: 0.55, bbox: 0.00
tr. batch 2/10 (ep. 1) fw 2.057s / bw 0.777s / total 2.834s || loss: 0.77, class: 0.69, bbox: 0.08
tr. batch 2/10 (ep. 1) fw 2.057s / bw 0.777s / total 2.834s || loss: 0.77, class: 0.69, bbox: 0.08
tr. batch 3/10 (ep. 1) fw 1.838s / bw 0.515s / total 2.353s || loss: 0.77, class: 0.77, bbox: 0.00
tr. batch 3/10 (ep. 1) fw 1.838s / bw 0.515s / total 2.353s || loss: 0.77, class: 0.77, bbox: 0.00
tr. batch 4/10 (ep. 1) fw 1.803s / bw 0.741s / total 2.544s || loss: 0.94, class: 0.83, bbox: 0.11
tr. batch 4/10 (ep. 1) fw 1.803s / bw 0.741s / total 2.544s || loss: 0.94, class: 0.83, bbox: 0.11
tr. batch 5/10 (ep. 1) fw 1.717s / bw 0.741s / total 2.458s || loss: 0.85, class: 0.76, bbox: 0.09
tr. batch 5/10 (ep. 1) fw 1.717s / bw 0.741s / total 2.458s || loss: 0.85, class: 0.76, bbox: 0.09
tr. batch 6/10 (ep. 1) fw 1.654s / bw 0.744s / total 2.398s || loss: 1.07, class: 0.90, bbox: 0.17
tr. batch 6/10 (ep. 1) fw 1.654s / bw 0.744s / total 2.398s || loss: 1.07, class: 0.90, bbox: 0.17
tr. batch 7/10 (ep. 1) fw 2.217s / bw 0.742s / total 2.959s || loss: 0.80, class: 0.69, bbox: 0.11
tr. batch 7/10 (ep. 1) fw 2.217s / bw 0.742s / total 2.959s || loss: 0.80, class: 0.69, bbox: 0.11
tr. batch 8/10 (ep. 1) fw 1.733s / bw 0.740s / total 2.473s || loss: 0.80, class: 0.69, bbox: 0.12
tr. batch 8/10 (ep. 1) fw 1.733s / bw 0.740s / total 2.473s || loss: 0.80, class: 0.69, bbox: 0.12
tr. batch 9/10 (ep. 1) fw 1.709s / bw 0.750s / total 2.459s || loss: 1.07, class: 0.89, bbox: 0.18
tr. batch 9/10 (ep. 1) fw 1.709s / bw 0.750s / total 2.459s || loss: 1.07, class: 0.89, bbox: 0.18
tr. batch 10/10 (ep. 1) fw 2.189s / bw 0.743s / total 2.932s || loss: 1.06, class: 0.89, bbox: 0.17
tr. batch 10/10 (ep. 1) fw 2.189s / bw 0.743s / total 2.932s || loss: 1.06, class: 0.89, bbox: 0.17
evaluating in mode train
evaluating in mode train
evaluating with match_iou: 0.1
evaluating with match_iou: 0.1
starting validation in mode val_sampling.

It has been stuck at "starting validation" for more than 4 hours. Please help me.
Thank you in advance Sir.

Input data axis ordering

Hi,
I have a question regarding the shape of the input data. In lidc_exp/data_loader.py, the class <BatchGenerator> returns a dictionary with axis order (b, c, x, y, (z)). But in models/backbone.py, the forward function takes images of shape (b, c, y, x, (z)). Where is the conversion between the shapes done, and is there a reason for it? And is there a reason for not using the standard order (b, c, (z), y, x)? Thanks.
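
A minimal illustration (not the repo's own conversion code, just numpy transposes showing how the layouts mentioned above relate):

import numpy as np

batch = np.zeros((2, 1, 128, 96, 64))          # (b, c, x, y, z) as described for the data loader
as_yx = np.transpose(batch, (0, 1, 3, 2, 4))   # swap x and y -> (b, c, y, x, z) as backbone.py expects
as_zyx = np.transpose(batch, (0, 1, 4, 3, 2))  # -> (b, c, z, y, x), the "standard" order asked about
print(batch.shape, as_yx.shape, as_zyx.shape)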

Why seg dice loss never goes down?

The Retina U-Net's seg dice loss never goes down during training.
(training loss curve screenshot)

And the validation segmentation predictions are always messed up, like the row below:
(prediction example screenshot)

No DICOMs found for file: using LIDC dataset and LIDC-IDRI-processing tool

I am trying to preprocess the LIDC dataset but I am getting the following errors. Can anyone help me with this?

No DICOM's found for file: /LIDC-IDRI-Preprocessing/LIDC_Dataset/XML_Data/LIDC-XML-only/tcia-lidc-xml/186/075.xml
/LIDC-IDRI-Preprocessing/LIDC_Dataset/XML_Data/LIDC-XML-only/tcia-lidc-xml/186/049.xml
/LIDC-IDRI-Preprocessing/LIDC_Dataset/XML_Data/LIDC-XML-only/tcia-lidc-xml/186/049.xml
1.3.6.1.4.1.14519.5.2.1.6279.6001.212697393127299815450339637649 1.3.6.1.4.1.14519.5.2.1.6279.6001.410251741986998833890312367579
[]
No DICOM's found for file: /LIDC-IDRI-Preprocessing/LIDC_Dataset/XML_Data/LIDC-XML-only/tcia-lidc-xml/186/049.xml
(CMAKE_QT) dv00:/LIDC-IDRI-Preprocessing/LIDC-IDRI-processing$

Preparing the dataset

Hello,

I went through your docs; the work seems very helpful and I wish to try it out. Also, I would like to ask whether it is necessary to use the script written by your friend to convert the data into the necessary formats.
What if I already have the LIDC-IDRI dataset segmented? Can I not use it?

Please shed some light on the pre-processing work.

Showing Pred+GT at lowest row of pred_example.png

Hello Sir Paul, thank you for your awesome commit that resolved all the previous issues.
May I ask: based on readme.md and the paper, the wcs prediction should be plotted on the lowest row of pred_example.png, but in my case (both before and after the recent commit) the prediction boxes (blue boxes) were not plotted. Is there any additional code that needs to be added to plotting.py in order to enable pred/gt boxes on the lowest image row?
Thanks in advance

Using the LIDC dataset

It looks like you have done some additional preprocessing on the LIDC dataset prior to running preprocessing.py, such as saving the data in a different format from the original DICOM files and making a csv file, characteristics.csv, with the metadata from the xml files. Would you be able to share the code used on the raw data as downloaded from the source (https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)? Thanks.

Link error with roi_align and nms with Visual Studio 2017

When calling build.py for "roi_align", I am getting the following link error with Visual Studio 2017.
I am not sure why; TH.h contains only template definitions, so there shouldn't be any library dependency, right? The same problem occurs with nms.

Creating library .\Release\_crop_and_resize.lib and object .\Release\_crop_and_resize.exp
LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library
crop_and_resize.obj : error LNK2001: unresolved external symbol __imp_THFloatTensor_zero
crop_and_resize.obj : error LNK2001: unresolved external symbol __imp_THFloatTensor_data
crop_and_resize.obj : error LNK2001: unresolved external symbol __imp_THFloatTensor_resize4d
crop_and_resize.obj : error LNK2001: unresolved external symbol __imp_THFloatTensor_size
[....]

The linker call is:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0/lib/x64" "/LIBPATH:C:\Program Files\NVIDIA Corporation\NvToolsExt\/lib/x64" /LIBPATH:D:\stewun\mdf\medicaldetectiontoolkit\venv\lib\site-packages\torch\utils\ffi\..\..\lib /LIBPATH:D:\stewun\mdf\medicaldetectiontoolkit\venv\libs /LIBPATH:D:\stewun\WinPython3.6.5\python-3.6.5.amd64\libs /LIBPATH:D:\stewun\WinPython3.6.5\python-3.6.5.amd64 /LIBPATH:D:\stewun\mdf\medicaldetectiontoolkit\venv\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\ATLMFC\lib\x64" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\um\x64" /EXPORT:PyInit__crop_and_resize .\Release\_crop_and_resize.obj .\Release\stewun\mdf\medicaldetectiontoolkit\cuda_functions\roi_align_3D\roi_align\src\crop_and_resize.obj .\Release\stewun\mdf\medicaldetectiontoolkit\cuda_functions\roi_align_3D\roi_align\src\crop_and_resize_gpu.obj D:\stewun\mdf\medicaldetectiontoolkit\cuda_functions\roi_align_3D\roi_align\src/cuda/crop_and_resize_kernel.cu.o /OUT:.\_crop_and_resize.pyd /IMPLIB:.\Release\_crop_and_resize.lib

LIDC dataloader ground truth

Hi, I was wondering how to load ground truth objects if I use the LIDC dataloader code.
As I understood from the code of preprocessing.py and Readme, each object is loaded as a bounding box.
I have multiple objects in my image, which I want to segment. Does that mean I should have a separate .nii.gz file for each object?
What would the folder structure look like?

Btw, I want to use the 3D MRCNN; the inputs were originally tif files, which I converted to nrrd.

Thank you.

Issue while testing

Hi,

First, thanks for this outstanding work! I managed to train the retina_Unet model on my patients, using the data loader from the toynet example, where inputs are also 3D npy arrays with shape 128,128,256. I trained both with the held-out test set set to True and to False, and both results seem promising, as shown by the plotting examples.

Nevertheless, I can't manage to make predictions on the test cohort, as the following error appears, and I don't know where it may come from.
(MDK) paulbd@host:~/medicaldetectiontoolkit$ python exec.py --mode test --exp_dir results/test_AUJ3
/home/paulbd/anaconda3/envs/MDK/lib/python3.6/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Logging to results/test_AUJ3/fold_0/exec.log
starting testing model of fold 0 in exp results/test_AUJ3
feature map shapes: [[32 32 64] [16 16 32] [ 8 8 16] [ 4 4 8]]
anchor scales: {'xy': [[8], [16], [32], [64]], 'z': [[2], [4], [8], [16]]}
level 0: built anchors (196608, 6) / expected anchors 196608 ||| total build (196608, 6) / total expected 224640
level 1: built anchors (24576, 6) / expected anchors 24576 ||| total build (221184, 6) / total expected 224640
level 2: built anchors (3072, 6) / expected anchors 3072 ||| total build (224256, 6) / total expected 224640
level 3: built anchors (384, 6) / expected anchors 384 ||| total build (224640, 6) / total expected 224640
using default pytorch weight init
data set loaded with: 97 test patients from /home/paulbd/medicaldetectiontoolkit/GAINED_dataset/test
tmp ensembling over rank_ix:0 epoch:results/test_AUJ3/fold_0/2_best_params.pth
check patient data loader (1, 1, 128, 128, 256) (1, 1, 128, 128, 256)
evaluating patient 0 for fold 0
forwarding (patched) patient with shape: (1, 1, 128, 128, 256)
Traceback (most recent call last):
  File "exec.py", line 185, in <module>
    test(logger)
  File "exec.py", line 121, in test
    test_results_list = test_predictor.predict_test_set(batch_gen, return_results=True)
  File "/home/paulbd/medicaldetectiontoolkit/predictor.py", line 159, in predict_test_set
    results_dict = self.predict_patient(batch)
  File "/home/paulbd/medicaldetectiontoolkit/predictor.py", line 102, in predict_patient
    results_dict = self.data_aug_forward(batch)
  File "/home/paulbd/medicaldetectiontoolkit/predictor.py", line 292, in data_aug_forward
    results_list = [self.spatial_tiling_forward(batch, patch_crops)]
  File "/home/paulbd/medicaldetectiontoolkit/predictor.py", line 448, in spatial_tiling_forward
    results_dict = self.batch_tiling_forward(batch)
  File "/home/paulbd/medicaldetectiontoolkit/predictor.py", line 483, in batch_tiling_forward
    results_dict = self.net.test_forward(batch, return_masks=self.cf.return_masks_in_test)
  File "/home/paulbd/medicaldetectiontoolkit/models/tmp_model.py", line 975, in test_forward
    _, _, _, detections, detection_masks = self.forward(img)
  File "/home/paulbd/medicaldetectiontoolkit/models/tmp_model.py", line 1008, in forward
    batch_rpn_rois, batch_proposal_boxes = proposal_layer(rpn_pred_probs, rpn_pred_deltas, proposal_count, self.anchors, self.cf)
  File "/home/paulbd/medicaldetectiontoolkit/models/tmp_model.py", line 333, in proposal_layer
    anchors = anchors[order, :]
RuntimeError: index 898557 is out of bounds for dimension 0 with size 224640

Thanks for your help and again congratulations.

Best regards

Paul

Float16 tensors converted back to Float32 when passed to GPU

Hi Paul,

I noticed that in the data loaders of the LIDC and toy experiments, the data was loaded as float16 tensors.

However, in all models, when the data is allocated to a GPU it seems to be converted back to Float32, as .float() always precedes .cuda(). For example:

Retina U-Net
https://github.com/pfjaeger/medicaldetectiontoolkit/blob/2b343ba974a6936d10bc7ba2d571db28f0760df5/models/retina_unet.py#L398

Mask RCNN
https://github.com/pfjaeger/medicaldetectiontoolkit/blob/2b343ba974a6936d10bc7ba2d571db28f0760df5/models/mrcnn.py#L864

UFRCNN
https://github.com/pfjaeger/medicaldetectiontoolkit/blob/2b343ba974a6936d10bc7ba2d571db28f0760df5/models/ufrcnn.py#L822

The strange thing is, when training with Mask RCNN, I can double the batch_size before triggering a CUDA out-of-memory error when the data is loaded as Float16 compared to Float32.

I didn't test this for the other models, but do you have an idea about:

  1. do the models use Float16 tensors in some places I couldn't spot?
  2. do you get the same behaviour regarding batch_size and CUDA OOM errors when switching from Float16 to Float32?
  3. If the answer to 1 is no, what would cause the above-mentioned behaviour?

Thanks!
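
For reference, a minimal sketch (not the repo's code) of the difference between the two conversions discussed above; whether half precision is numerically stable enough for training these models is a separate question:

import numpy as np
import torch

arr = np.random.rand(2, 1, 64, 64).astype(np.float16)  # float16 array, as produced by the data loaders

x32 = torch.from_numpy(arr).float()   # .float() casts to float32, so a later .cuda() moves a float32 tensor
x16 = torch.from_numpy(arr)           # stays float16 (torch.half)

print(x32.dtype, x32.element_size())  # torch.float32, 4 bytes per element
print(x16.dtype, x16.element_size())  # torch.float16, 2 bytes per element

if torch.cuda.is_available():
    # only the tensor that was never cast keeps the 2-byte-per-element footprint on the GPU
    print(x16.cuda().dtype, x32.cuda().dtype)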

Error During Transferring Pre-trained RetinaNet LIDC Model

Hello Sir Paul, thank you for the suggestion you gave me.
I have tried to transfer the LIDC pre-trained weights; however, I face several issues:

Traceback (most recent call last):
  File "exec.py", line 167, in <module>
    train(logger)
  File "exec.py", line 46, in train
    starting_epoch = utils.load_checkpoint(cf.resume_to_checkpoint, net, optimizer)
  File "/home/ivan/retcustom3d/utils/exp_utils.py", line 186, in load_checkpoint
    net.load_state_dict(checkpoint['state_dict'])
  File "/home/ivan/.virtualenvs/virtual-py3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for net:
        Missing key(s) in state_dict: "Fpn.C1.0.weight", "Fpn.C1.0.bias", "Fpn.C2.1.conv1.0.weight", "Fpn.C2.1.conv1.0.bias", "Fpn.C2.1.conv2.0.weight", "Fpn.C2.1.conv2.0.bias", "Fpn.C2.1.conv3.weight", "Fpn.C2.1.conv3.bias", "Fpn.C2.1.downsample.weight", "Fpn.C2.1.downsample.bias", "Fpn.C2.2.conv1.0.weight", "Fpn.C2.2.conv1.0.bias", "Fpn.C2.2.conv2.0.weight", "Fpn.C2.2.conv2.0.bias", "Fpn.C2.2.conv3.weight", "Fpn.C2.2.conv3.bias", "Fpn.C2.3.conv1.0.weight", "Fpn.C2.3.conv1.0.bias", "Fpn.C2.3.conv2.0.weight", "Fpn.C2.3.conv2.0.bias", "Fpn.C2.3.conv3.weight", "Fpn.C2.3.conv3.bias", "Fpn.C3.0.conv1.0.weight", "Fpn.C3.0.conv1.0.bias", "Fpn.C3.0.conv2.0.weight", "Fpn.C3.0.conv2.0.bias", "Fpn.C3.0.conv3.weight", "Fpn.C3.0.conv3.bias", "Fpn.C3.0.downsample.weight", "Fpn.C3.0.downsample.bias", "Fpn.C3.1.conv1.0.weight", "Fpn.C3.1.conv1.0.bias", "Fpn.C3.1.conv2.0.weight", "Fpn.C3.1.conv2.0.bias", "Fpn.C3.1.conv3.weight", "Fpn.C3.1.conv3.bias", "Fpn.C3.2.conv1.0.weight", "Fpn.C3.2.conv1.0.bias", "Fpn.C3.2.conv2.0.weight", "Fpn.C3.2.conv2.0.bias", "Fpn.C3.2.conv3.weight", "Fpn.C3.2.conv3.bias", "Fpn.C3.3.conv1.0.weight", "Fpn.C3.3.conv1.0.bias", "Fpn.C3.3.conv2.0.weight", "Fpn.C3.3.conv2.0.bias", "Fpn.C3.3.conv3.weight", "Fpn.C3.3.conv3.bias", "Fpn.C4.0.conv1.0.weight", "Fpn.C4.0.conv1.0.bias", "Fpn.C4.0.conv2.0.weight", "Fpn.C4.0.conv2.0.bias", "Fpn.C4.0.conv3.weight", "Fpn.C4.0.conv3.bias", "Fpn.C4.0.downsample.weight", "Fpn.C4.0.downsample.bias", "Fpn.C4.1.conv1.0.weight", "Fpn.C4.1.conv1.0.bias", "Fpn.C4.1.conv2.0.weight", "Fpn.C4.1.conv2.0.bias", "Fpn.C4.1.conv3.weight", "Fpn.C4.1.conv3.bias", "Fpn.C4.2.conv1.0.weight", "Fpn.C4.2.conv1.0.bias", "Fpn.C4.2.conv2.0.weight", "Fpn.C4.2.conv2.0.bias", "Fpn.C4.2.conv3.weight", "Fpn.C4.2.conv3.bias", "Fpn.C4.3.conv1.0.weight", "Fpn.C4.3.conv1.0.bias", "Fpn.C4.3.conv2.0.weight", "Fpn.C4.3.conv2.0.bias", "Fpn.C4.3.conv3.weight", "Fpn.C4.3.conv3.bias", "Fpn.C4.4.conv1.0.weight", "Fpn.C4.4.conv1.0.bias", "Fpn.C4.4.conv2.0.weight", "Fpn.C4.4.conv2.0.bias", "Fpn.C4.4.conv3.weight", "Fpn.C4.4.conv3.bias", "Fpn.C4.5.conv1.0.weight", "Fpn.C4.5.conv1.0.bias", "Fpn.C4.5.conv2.0.weight", "Fpn.C4.5.conv2.0.bias", "Fpn.C4.5.conv3.weight", "Fpn.C4.5.conv3.bias", "Fpn.C5.0.conv1.0.weight", "Fpn.C5.0.conv1.0.bias", "Fpn.C5.0.conv2.0.weight", "Fpn.C5.0.conv2.0.bias", "Fpn.C5.0.conv3.weight", "Fpn.C5.0.conv3.bias", "Fpn.C5.0.downsample.weight", "Fpn.C5.0.downsample.bias", "Fpn.C5.1.conv1.0.weight", "Fpn.C5.1.conv1.0.bias", "Fpn.C5.1.conv2.0.weight", "Fpn.C5.1.conv2.0.bias", "Fpn.C5.1.conv3.weight", "Fpn.C5.1.conv3.bias", "Fpn.C5.2.conv1.0.weight", "Fpn.C5.2.conv1.0.bias", "Fpn.C5.2.conv2.0.weight", "Fpn.C5.2.conv2.0.bias", "Fpn.C5.2.conv3.weight", "Fpn.C5.2.conv3.bias", "Fpn.P5_conv1.weight", "Fpn.P5_conv1.bias", "Fpn.P4_conv1.weight", "Fpn.P4_conv1.bias", "Fpn.P3_conv1.weight", "Fpn.P3_conv1.bias", "Fpn.P2_conv1.weight", "Fpn.P2_conv1.bias", "Fpn.P1_conv1.weight", "Fpn.P1_conv1.bias", "Fpn.P1_conv2.weight", "Fpn.P1_conv2.bias", "Fpn.P2_conv2.weight", "Fpn.P2_conv2.bias", "Fpn.P3_conv2.weight", "Fpn.P3_conv2.bias", "Fpn.P4_conv2.weight", "Fpn.P4_conv2.bias", "Fpn.P5_conv2.weight", "Fpn.P5_conv2.bias".
        Unexpected key(s) in state_dict: "fpn.C1.0.weight", "fpn.C1.0.bias", "fpn.C2.1.conv1.0.weight", "fpn.C2.1.conv1.0.bias", "fpn.C2.1.conv2.0.weight", "fpn.C2.1.conv2.0.bias", "fpn.C2.1.conv3.weight", "fpn.C2.1.conv3.bias", "fpn.C2.1.downsample.0.weight", "fpn.C2.1.downsample.0.bias", "fpn.C2.2.conv1.0.weight", "fpn.C2.2.conv1.0.bias", "fpn.C2.2.conv2.0.weight", "fpn.C2.2.conv2.0.bias", "fpn.C2.2.conv3.weight", "fpn.C2.2.conv3.bias", "fpn.C2.3.conv1.0.weight", "fpn.C2.3.conv1.0.bias", "fpn.C2.3.conv2.0.weight", "fpn.C2.3.conv2.0.bias", "fpn.C2.3.conv3.weight", "fpn.C2.3.conv3.bias", "fpn.C3.0.conv1.0.weight", "fpn.C3.0.conv1.0.bias", "fpn.C3.0.conv2.0.weight", "fpn.C3.0.conv2.0.bias", "fpn.C3.0.conv3.weight", "fpn.C3.0.conv3.bias", "fpn.C3.0.downsample.0.weight", "fpn.C3.0.downsample.0.bias", "fpn.C3.1.conv1.0.weight", "fpn.C3.1.conv1.0.bias", "fpn.C3.1.conv2.0.weight", "fpn.C3.1.conv2.0.bias", "fpn.C3.1.conv3.weight", "fpn.C3.1.conv3.bias", "fpn.C3.2.conv1.0.weight", "fpn.C3.2.conv1.0.bias", "fpn.C3.2.conv2.0.weight", "fpn.C3.2.conv2.0.bias", "fpn.C3.2.conv3.weight", "fpn.C3.2.conv3.bias", "fpn.C3.3.conv1.0.weight", "fpn.C3.3.conv1.0.bias", "fpn.C3.3.conv2.0.weight", "fpn.C3.3.conv2.0.bias", "fpn.C3.3.conv3.weight", "fpn.C3.3.conv3.bias", "fpn.C4.0.conv1.0.weight", "fpn.C4.0.conv1.0.bias", "fpn.C4.0.conv2.0.weight", "fpn.C4.0.conv2.0.bias", "fpn.C4.0.conv3.weight", "fpn.C4.0.conv3.bias", "fpn.C4.0.downsample.0.weight", "fpn.C4.0.downsample.0.bias", "fpn.C4.1.conv1.0.weight", "fpn.C4.1.conv1.0.bias", "fpn.C4.1.conv2.0.weight", "fpn.C4.1.conv2.0.bias", "fpn.C4.1.conv3.weight", "fpn.C4.1.conv3.bias", "fpn.C4.2.conv1.0.weight", "fpn.C4.2.conv1.0.bias", "fpn.C4.2.conv2.0.weight", "fpn.C4.2.conv2.0.bias", "fpn.C4.2.conv3.weight", "fpn.C4.2.conv3.bias", "fpn.C4.3.conv1.0.weight", "fpn.C4.3.conv1.0.bias", "fpn.C4.3.conv2.0.weight", "fpn.C4.3.conv2.0.bias", "fpn.C4.3.conv3.weight", "fpn.C4.3.conv3.bias", "fpn.C4.4.conv1.0.weight", "fpn.C4.4.conv1.0.bias", "fpn.C4.4.conv2.0.weight", "fpn.C4.4.conv2.0.bias", "fpn.C4.4.conv3.weight", "fpn.C4.4.conv3.bias", "fpn.C4.5.conv1.0.weight", "fpn.C4.5.conv1.0.bias", "fpn.C4.5.conv2.0.weight", "fpn.C4.5.conv2.0.bias", "fpn.C4.5.conv3.weight", "fpn.C4.5.conv3.bias", "fpn.C5.0.conv1.0.weight", "fpn.C5.0.conv1.0.bias", "fpn.C5.0.conv2.0.weight", "fpn.C5.0.conv2.0.bias", "fpn.C5.0.conv3.weight", "fpn.C5.0.conv3.bias", "fpn.C5.0.downsample.0.weight", "fpn.C5.0.downsample.0.bias", "fpn.C5.1.conv1.0.weight", "fpn.C5.1.conv1.0.bias", "fpn.C5.1.conv2.0.weight", "fpn.C5.1.conv2.0.bias", "fpn.C5.1.conv3.weight", "fpn.C5.1.conv3.bias", "fpn.C5.2.conv1.0.weight", "fpn.C5.2.conv1.0.bias", "fpn.C5.2.conv2.0.weight", "fpn.C5.2.conv2.0.bias", "fpn.C5.2.conv3.weight", "fpn.C5.2.conv3.bias", "fpn.P5_conv1.weight", "fpn.P5_conv1.bias", "fpn.P4_conv1.weight", "fpn.P4_conv1.bias", "fpn.P3_conv1.weight", "fpn.P3_conv1.bias", "fpn.P2_conv1.weight", "fpn.P2_conv1.bias", "fpn.P1_conv1.weight", "fpn.P1_conv1.bias", "fpn.P1_conv2.weight", "fpn.P1_conv2.bias", "fpn.P2_conv2.weight", "fpn.P2_conv2.bias", "fpn.P3_conv2.weight", "fpn.P3_conv2.bias", "fpn.P4_conv2.weight", "fpn.P4_conv2.bias", "fpn.P5_conv2.weight", "fpn.P5_conv2.bias".

For this capitalization incompatibility I edited self.Fpn in retinanet.py to self.fpn, and it worked for the conv layers; however, the next error occurs for the downsample weights and biases:

Traceback (most recent call last):
  File "exec.py", line 167, in <module>
    train(logger)
  File "exec.py", line 46, in train
      starting_epoch = utils.load_checkpoint(cf.resume_to_checkpoint, net, optimizer)
  File "/home/ivan/retcustom3d/utils/exp_utils.py", line 186, in load_checkpoint
    net.load_state_dict(checkpoint['state_dict'])
  File "/home/ivan/.virtualenvs/virtual-py3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for net:
        Missing key(s) in state_dict: "fpn.C2.1.downsample.bias", "fpn.C2.1.downsample.weight", "fpn.C3.0.downsample.bias", "fpn.C3.0.downsample.weight", "fpn.C4.0.downsample.bias", "fpn.C4.0.downsample.weight", "fpn.C5.0.downsample.bias", "fpn.C5.0.downsample.weight".
        Unexpected key(s) in state_dict: "fpn.C2.1.downsample.0.weight", "fpn.C2.1.downsample.0.bias", "fpn.C3.0.downsample.0.weight", "fpn.C3.0.downsample.0.bias", "fpn.C4.0.downsample.0.weight", "fpn.C4.0.downsample.0.bias", "fpn.C5.0.downsample.0.weight", "fpn.C5.0.downsample.0.bias".

Solutions tried

  1. renaming the variable self.downsample --> self.downsample.0 (failed)
  2. setting load_state_dict(..., strict=False) --> able to run from epoch 100; however, I get an index out of range error in train or train_test mode:
starting training epoch 100

tr. batch 1/50 (ep. 100) fw 7.221s / bw 2.101s / total 9.322s || loss: 3.70, class: 3.50, bbox: 0.20
tr. batch 2/50 (ep. 100) fw 6.843s / bw 2.012s / total 8.855s || loss: 1.11, class: 0.93, bbox: 0.18
tr. batch 3/50 (ep. 100) fw 5.348s / bw 2.008s / total 7.356s || loss: 0.90, class: 0.71, bbox: 0.19
tr. batch 4/50 (ep. 100) fw 5.678s / bw 2.025s / total 7.703s || loss: 1.55, class: 1.27, bbox: 0.28
tr. batch 5/50 (ep. 100) fw 6.815s / bw 2.021s / total 8.837s || loss: 0.95, class: 0.77, bbox: 0.18
tr. batch 6/50 (ep. 100) fw 7.942s / bw 2.021s / total 9.963s || loss: 1.37, class: 1.16, bbox: 0.21
tr. batch 7/50 (ep. 100) fw 8.728s / bw 2.048s / total 10.776s || loss: 1.41, class: 1.07, bbox: 0.35
tr. batch 8/50 (ep. 100) fw 7.288s / bw 2.026s / total 9.314s || loss: 1.57, class: 1.30, bbox: 0.27
tr. batch 9/50 (ep. 100) fw 8.727s / bw 2.015s / total 10.742s || loss: 1.15, class: 0.97, bbox: 0.18
tr. batch 10/50 (ep. 100) fw 8.047s / bw 2.003s / total 10.050s || loss: 0.79, class: 0.70, bbox: 0.09
tr. batch 11/50 (ep. 100) fw 5.477s / bw 2.022s / total 7.499s || loss: 0.96, class: 0.84, bbox: 0.12
tr. batch 12/50 (ep. 100) fw 7.219s / bw 2.012s / total 9.231s || loss: 1.04, class: 0.94, bbox: 0.09
tr. batch 13/50 (ep. 100) fw 7.120s / bw 2.029s / total 9.149s || loss: 0.95, class: 0.77, bbox: 0.18
tr. batch 14/50 (ep. 100) fw 6.659s / bw 2.022s / total 8.681s || loss: 0.82, class: 0.72, bbox: 0.10
tr. batch 15/50 (ep. 100) fw 6.781s / bw 2.006s / total 8.787s || loss: 0.88, class: 0.74, bbox: 0.13
tr. batch 16/50 (ep. 100) fw 7.215s / bw 2.011s / total 9.226s || loss: 1.09, class: 0.94, bbox: 0.16
tr. batch 17/50 (ep. 100) fw 4.831s / bw 1.996s / total 6.828s || loss: 0.77, class: 0.67, bbox: 0.10
tr. batch 18/50 (ep. 100) fw 7.607s / bw 2.024s / total 9.631s || loss: 0.99, class: 0.79, bbox: 0.20
tr. batch 19/50 (ep. 100) fw 7.291s / bw 2.018s / total 9.310s || loss: 1.11, class: 0.87, bbox: 0.23
tr. batch 20/50 (ep. 100) fw 7.864s / bw 2.029s / total 9.893s || loss: 0.87, class: 0.68, bbox: 0.19
tr. batch 21/50 (ep. 100) fw 6.009s / bw 2.059s / total 8.068s || loss: 0.89, class: 0.74, bbox: 0.14
tr. batch 22/50 (ep. 100) fw 6.142s / bw 2.020s / total 8.162s || loss: 1.05, class: 0.88, bbox: 0.17
tr. batch 23/50 (ep. 100) fw 5.101s / bw 2.002s / total 7.103s || loss: 0.79, class: 0.68, bbox: 0.11
tr. batch 24/50 (ep. 100) fw 7.096s / bw 2.020s / total 9.116s || loss: 0.99, class: 0.80, bbox: 0.18
tr. batch 25/50 (ep. 100) fw 7.429s / bw 1.994s / total 9.423s || loss: 0.86, class: 0.67, bbox: 0.19
tr. batch 26/50 (ep. 100) fw 5.912s / bw 2.019s / total 7.932s || loss: 0.94, class: 0.79, bbox: 0.15
tr. batch 27/50 (ep. 100) fw 7.303s / bw 2.010s / total 9.313s || loss: 1.06, class: 0.78, bbox: 0.28
tr. batch 28/50 (ep. 100) fw 6.317s / bw 2.002s / total 8.320s || loss: 0.64, class: 0.59, bbox: 0.05
tr. batch 29/50 (ep. 100) fw 8.409s / bw 2.010s / total 10.419s || loss: 0.82, class: 0.68, bbox: 0.14
tr. batch 30/50 (ep. 100) fw 6.803s / bw 2.004s / total 8.807s || loss: 0.77, class: 0.68, bbox: 0.09
tr. batch 31/50 (ep. 100) fw 7.824s / bw 2.009s / total 9.833s || loss: 0.77, class: 0.63, bbox: 0.14
tr. batch 32/50 (ep. 100) fw 8.600s / bw 2.002s / total 10.602s || loss: 0.86, class: 0.74, bbox: 0.12
tr. batch 33/50 (ep. 100) fw 10.036s / bw 2.046s / total 12.082s || loss: 0.99, class: 0.79, bbox: 0.19
tr. batch 34/50 (ep. 100) fw 8.687s / bw 2.028s / total 10.715s || loss: 0.91, class: 0.75, bbox: 0.16
tr. batch 35/50 (ep. 100) fw 9.219s / bw 2.025s / total 11.244s || loss: 0.90, class: 0.70, bbox: 0.20
tr. batch 36/50 (ep. 100) fw 8.416s / bw 2.115s / total 10.532s || loss: 1.10, class: 0.87, bbox: 0.23
tr. batch 37/50 (ep. 100) fw 5.734s / bw 2.007s / total 7.741s || loss: 0.70, class: 0.58, bbox: 0.13
tr. batch 38/50 (ep. 100) fw 7.441s / bw 2.006s / total 9.447s || loss: 0.57, class: 0.51, bbox: 0.06
tr. batch 39/50 (ep. 100) fw 7.173s / bw 2.025s / total 9.198s || loss: 1.00, class: 0.83, bbox: 0.16
tr. batch 40/50 (ep. 100) fw 6.762s / bw 2.017s / total 8.778s || loss: 0.90, class: 0.72, bbox: 0.19
tr. batch 41/50 (ep. 100) fw 6.915s / bw 2.014s / total 8.929s || loss: 0.93, class: 0.77, bbox: 0.17
tr. batch 42/50 (ep. 100) fw 6.596s / bw 2.008s / total 8.604s || loss: 0.99, class: 0.85, bbox: 0.14
tr. batch 43/50 (ep. 100) fw 8.112s / bw 2.040s / total 10.152s || loss: 0.50, class: 0.44, bbox: 0.05
tr. batch 44/50 (ep. 100) fw 6.126s / bw 2.022s / total 8.148s || loss: 1.11, class: 0.86, bbox: 0.25
tr. batch 45/50 (ep. 100) fw 5.511s / bw 2.031s / total 7.542s || loss: 0.96, class: 0.75, bbox: 0.21
tr. batch 46/50 (ep. 100) fw 7.721s / bw 2.009s / total 9.730s || loss: 0.87, class: 0.71, bbox: 0.16
tr. batch 47/50 (ep. 100) fw 5.878s / bw 2.038s / total 7.916s || loss: 0.92, class: 0.74, bbox: 0.19
tr. batch 48/50 (ep. 100) fw 7.050s / bw 2.030s / total 9.080s || loss: 0.96, class: 0.79, bbox: 0.16
tr. batch 49/50 (ep. 100) fw 8.159s / bw 2.072s / total 10.231s || loss: 1.03, class: 0.87, bbox: 0.16
tr. batch 50/50 (ep. 100) fw 9.161s / bw 2.031s / total 11.191s || loss: 0.80, class: 0.70, bbox: 0.10
evaluating in mode train
evaluating with match_iou: 0.1
starting validation in mode val_sampling.
evaluating in mode val_sampling
evaluating with match_iou: 0.1
non none scores: [0.00000000e+00 2.64303791e-05]
Traceback (most recent call last):
  File "exec.py", line 167, in <module>
    train(logger)
  File "exec.py", line 102, in train
    TrainingPlot.update_and_save(monitor_metrics, epoch)
  File "/home/ivan/retcustom3d/plotting.py", line 189, in update_and_save
    self.do_validation)
  File "/home/ivan/retcustom3d/plotting.py", line 195, in detection_monitoring_plot
    monitor_values_keys = metrics['train']['monitor_values'][1][0].keys()
IndexError: list index out of range

Do you have a trick to suppress this error, Sir?
Thank you in advance Sir.
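
One possible workaround (a sketch based on the key names in the errors above, not an official fix): instead of renaming modules in the code, remap the checkpoint keys so they match the current model's state_dict, i.e. restore the capitalised 'Fpn.' prefix and drop the extra '.0' index in the downsample sub-module names:

import torch

checkpoint = torch.load('path/to/params.pth', map_location='cpu')  # hypothetical checkpoint path
state_dict = checkpoint['state_dict']

remapped = {}
for key, value in state_dict.items():
    new_key = key
    if new_key.startswith('fpn.'):
        new_key = 'Fpn.' + new_key[len('fpn.'):]                 # match the model's attribute name
    new_key = new_key.replace('.downsample.0.', '.downsample.')  # drop the Sequential index
    remapped[new_key] = value

# hypothetical: net is the already-constructed retina_net model from exec.py
net.load_state_dict(remapped)

This only addresses the naming mismatch; whether the two architectures are otherwise identical still has to be checked.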

issue with ConvertSegToBoundingBoxCoordinates when multiple ROI per slice or volume

Hi,

it seems that ConvertSegToBoundingBoxCoordinates makes a single bounding box when multiple ROIs are on the same slice (or volume). The same problem occurs whether self.dim = 2 or 3 in the configs file.

Indeed, for now I load 3D arrays via np.load using the data_loader of the LIDC dataset:

  • 1 np.float32 for img of dim 128x128x256
  • 1 np.int16 mask of dim 128x128x256, where there may be from 30 to 100 lesions per volume. All lesions are labeled with ones.

e.g. for one slice from the pred example in the plots:

(screenshot of one slice from the prediction example)

Is there anything that can be done to avoid this?

Best regards

Paul
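
A likely direction (an assumption based on the description above, not a confirmed fix): because all lesions carry the same label 1, nearby lesions cannot be separated into individual RoIs. A minimal sketch of giving each connected lesion its own integer id with scipy before the mask reaches ConvertSegToBoundingBoxCoordinates:

import numpy as np
from scipy.ndimage import label

mask = np.load('path/to/xxxxxx_rois.npy')    # hypothetical path; binary mask with all lesions == 1
instance_mask, n_lesions = label(mask > 0)   # each connected lesion gets its own id 1..n_lesions
print('found', n_lesions, 'lesions, labels:', np.unique(instance_mask))

Whether the transform expects exactly this kind of instance labelling should be checked against how the LIDC preprocessing builds its *_rois.npy files.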

cuda function crop_and_resize: different signatures for cpu and gpu versions

Hi Paul,

I was trying to get the code to run on cpu only for developing and testing on my local PC before training on a machine equipped with GPU(s).

Unfortunately, there is an issue with the following lines:

https://github.com/pfjaeger/medicaldetectiontoolkit/blob/695277d581128e210893fd0be5068868f21c9dfc/cuda_functions/roi_align_3D/roi_align/crop_and_resize.py#L21-L28

When called, the cpu-only function _backend.crop_and_resize_forward returns an error "crop_and_resize_forward expected 7 arguments, got 8".

This function is defined in the file ._crop_and_resize.so, which is a compiled library, so we don't have access to the code.
Would you have access to the definition/signature of this function and be able to help resolve this issue?

Thanks !

HELP!

In mrcnn.py, I get an error:
File "models/mrcnn.py", line 24, in <module>
from cuda_functions.nms_2D.pth_nms import nms_gpu as nms_2D
File "C:\medicaldetectiontoolkit-master\cuda_functions\nms_2D\pth_nms.py", line 2, in <module>
from ._ext import nms
File "C:\medicaldetectiontoolkit-master\cuda_functions\nms_2D\_ext\nms\__init__.py", line 3, in <module>
from ._nms import lib as _lib, ffi as _ffi
ModuleNotFoundError: No module named 'cuda_functions.nms_2D._ext.nms._nms'
Could you help me solve it?

mrcnn.py error

Traceback (most recent call last):
File "/home/wgs/medicaldetectiontoolkit/exec.py", line 167, in
train(logger)
File "/home/wgs/medicaldetectiontoolkit/exec.py", line 69, in train
results_dict = net.train_forward(batch)
File "models/mrcnn.py", line 873, in train_forward
rpn_class_logits, rpn_pred_deltas, proposal_boxes, detections, detection_masks = self.forward(img)
File "models/mrcnn.py", line 990, in forward
fpn_outs = self.fpn(img)
File "/home/wgs/medicaldetectiontoolkit/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "models/backbone.py", line 153, in forward
p2_pre_out = self.P2_conv1(c2_out) + F.interpolate(p3_pre_out, scale_factor=2)
RuntimeError: The size of tensor a (39) must match the size of tensor b (40) at non-singleton dimension 3

For LIDC, do you use only one scan per patient?

In several places in lidc_exp the patient id pid is used as a unique identifier for a scan, for example as a dictionary key. pid is also used to generate train/val/test splits within data_loader.get_train_generators. Since there are fewer patients than scans in the LIDC dataset, it looks like either a new id has been created for each scan or you have kept only one scan per patient. If you have created new ids, then the splits would not be on a patient level, so the same patient could appear in different splits. Could you clarify this?

No dataset loaded in val

Hi, I am trying the toynet experiment, which runs for one epoch, but no patients seem to be loaded in the validation dataset:
data set loaded with: 50 train / 0 val patients

which leads to this error:

WARNING:matplotlib.legend:No handles with labels found to put in legend.
/home/paulbd/anaconda3/envs/MDK/lib/python3.6/site-packages/pandas/core/ops.py:1167: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
result = method(y)
Traceback (most recent call last):
  File "exec.py", line 167, in <module>
    train(logger)
  File "exec.py", line 98, in train
    _, monitor_metrics['val'] = val_evaluator.evaluate_predictions(val_results_list, monitor_metrics['val'])
  File "/home/paulbd/medicaldetectiontoolkit/evaluator.py", line 191, in evaluate_predictions
    return self.return_metrics(monitor_metrics)
  File "/home/paulbd/medicaldetectiontoolkit/evaluator.py", line 218, in return_metrics
    spec_df = cl_df[cl_df.det_type != 'patient_tn']
  File "/home/paulbd/anaconda3/envs/MDK/lib/python3.6/site-packages/pandas/core/ops.py", line 1283, in wrapper
    res = na_op(values, other)
  File "/home/paulbd/anaconda3/envs/MDK/lib/python3.6/site-packages/pandas/core/ops.py", line 1169, in na_op
    raise TypeError("invalid type comparison")
TypeError: invalid type comparison

And I can't figure out which parameters to change in the config (self.do_validation=True, ...), as I have read that the validation set is automatically split off as 20% of the training cohort, and therefore I don't understand why it doesn't work.

Best regards and thanks for your help,

Paul

Not working: CropAndResizeFunction3D

Hi,

I am using pytorch 0.4.0 with python 3.6.7. I get an unexpected result when I test the function CropAndResizeFunction with the following code:

import torch
from cuda_functions.roi_align_3D.roi_align.crop_and_resize import CropAndResizeFunction as ra3D
import numpy as np

roi_layer = ra3D(2,2,2,0)

subv = np.ones((1, 1, 100, 100, 100)).astype(np.float32) # [B,C,Y,X,Z]
subv[:,:,:50,:50,:] = 0.0
box = np.array([[
    20.0 / subv.shape[2], 20.0 / subv.shape[3],
    30.0 / subv.shape[2], 30.0 / subv.shape[3],
    0.00 / subv.shape[4], 100. / subv.shape[4],
]])  # Y1, X1, Y2, X2, Z1, Z2
batch_ix = np.array([0])

im_blob = torch.from_numpy(subv).cuda()
im_box = torch.from_numpy(box).cuda()
ind = torch.from_numpy(batch_ix).int().cuda()

roi_cuda = roi_layer.forward(im_blob, im_box, ind)

print(roi_cuda)

The output is:
tensor([[[[[1., 1.],
           [1., 1.]],
          [[1., 1.],
           [1., 1.]]]]], device='cuda:0')

However, I think the output should be all zeros. Am I using the function correctly?

why retinanet outperforms maskrcnn for lung lesion detection?

Hi,
I've read your paper and I'm wondering why RetinaNet works better than Mask R-CNN for lung lesion detection with 2D or 2Dc inputs. I think in most cases a two-stage detector should be better than a one-stage detector with the same backbone. Have you ever investigated this? Or did I miss any details?
Thanks

Toy dataset generation: writing to file isn't thread safe

Hi !

First, thanks for this great work. This implementation of the object detectors is very clean, well organised and one of the best I've seen so far.

I think there is an issue with the way processes write content to a dataframe when generating the toy dataset: after generating and writing 1500 images to disk, the associated info_df.pickle dataframe only contains ~950 samples.

I believe the misbehaviour comes from:

https://github.com/pfjaeger/medicaldetectiontoolkit/blob/695277d581128e210893fd0be5068868f21c9dfc/experiments/toy_exp/generate_toys.py#L53-L55

If two processes read the pickle file simultaneously, the dataframes will have the same length for both. Then, both processes will write a row with the same index: df.loc[len(df)] = .... Ultimately, one of the two rows is lost when the second process writes its dataframe back to disk.
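
One possible fix (a sketch, not the repo's implementation): have each worker return its metadata row instead of reading and writing info_df.pickle itself, and let the parent process assemble and write the dataframe once after the pool has finished:

import pandas as pd
from multiprocessing import Pool

def create_sample(sample_id):
    # hypothetical: generate and save the image/seg for this id, then return its metadata row
    return {'pid': sample_id, 'class_id': sample_id % 2}

if __name__ == '__main__':
    with Pool(processes=8) as pool:
        rows = pool.map(create_sample, range(1500))
    info_df = pd.DataFrame(rows)              # built exactly once, so no concurrent pickle access
    info_df.to_pickle('info_df.pickle')
    print(len(info_df), 'rows written')

Alternatively, a multiprocessing.Lock around the read-modify-write of the pickle would also remove the race, at the cost of serialising that part.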

Retina U-Net Segmentation Loss Not Decreasing

Hi, thank you for this great repository!
I encountered some problems while training Retina U-Net, similar to a previous issue. When instance/batch norm is activated, the segmentation loss does not decrease and the segmentation does not work properly. This problem does not occur when no normalisation is used. After some digging into the code, I found a normalisation layer after the final convolution of the segmentation layer of Retina U-Net.

https://github.com/pfjaeger/medicaldetectiontoolkit/blob/2b343ba974a6936d10bc7ba2d571db28f0760df5/models/retina_unet.py#L378

I think there should be no normalisation of the segmentation logits (because it implicitly limits the output prediction), or did I miss something?
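
For illustration, a minimal sketch (an assumption about the intended fix, not the repo's code) of a segmentation head that keeps normalisation in intermediate blocks but emits raw logits from a plain final convolution:

import torch
import torch.nn as nn

num_seg_classes = 2  # hypothetical, e.g. foreground/background

seg_head = nn.Sequential(
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.InstanceNorm2d(32),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, num_seg_classes, kernel_size=1),  # final conv: no norm, unbounded logits
)

x = torch.randn(1, 32, 64, 64)
print(seg_head(x).shape)  # torch.Size([1, 2, 64, 64])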

How to multi-gpu parallel training?

The net uses a custom training method, so DataParallel does not work for multi-GPU training.
The default model uses only one GPU during training, even though CUDA_VISIBLE_DEVICES lists multiple GPUs.

pretrained models

Are there any pretrained models available, especially 3D models? T.I.A.

How to setup/convert a custom 3D dataset?

Hello!

I was wondering how well the 3D Mask-RCNN approach could work on my 3D dataset (NIfTI). So far I have not been able to find any guide or explanation of how I could apply it to a dataset that is not LIDC. Could you please explain/guide me how to convert my NIfTI data into the appropriate input for the 3D Mask-RCNN?

Thank You!
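
A minimal sketch of the conversion direction (assuming the per-patient *_img.npy / *_rois.npy plus info_df.pickle layout quoted in the first issue above; the exact dataframe columns the data loader expects should be checked against experiments/lidc_exp/data_loader.py, and nibabel is an assumed dependency for reading NIfTI):

import os
import numpy as np
import pandas as pd
import nibabel as nib  # assumption: nibabel is available

out_dir = 'path/to/dataset/pp_norm'  # hypothetical output folder
os.makedirs(out_dir, exist_ok=True)

cases = [('000001', 'case1_img.nii.gz', 'case1_seg.nii.gz')]  # hypothetical (pid, image, segmentation) triples
records = []
for pid, img_path, seg_path in cases:
    img = nib.load(img_path).get_fdata().astype(np.float32)
    rois = nib.load(seg_path).get_fdata().astype(np.int16)   # ideally one integer id per lesion
    np.save(os.path.join(out_dir, '{}_img.npy'.format(pid)), img)
    np.save(os.path.join(out_dir, '{}_rois.npy'.format(pid)), rois)
    records.append({'pid': pid, 'class_target': [1] * int(rois.max())})  # hypothetical columns

pd.DataFrame(records).to_pickle(os.path.join(out_dir, 'info_df.pickle'))

Resampling to a common spacing and intensity normalisation (as the LIDC preprocessing does) would still be needed on top of this.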

help!

Hi, I could not find the lung dataset; could you tell me how to use it?

list index out of range while testing.

Hi,
I got the following error while testing, after training with a 4-channel dataset.
I think I need to change something in the testing code, but I don't know exactly what. I guess it's due to a mismatch between the 4-channel data dimensions and the patch size dimensions. Could you comment on this?
My 4-channel dataset dim is (4, 240, 240, 155).
Logging to ./brats_tc/fold_0/exec.log
starting testing model of fold 0 in exp ./brats_tc
feature map shapes: [[32 32 64]
[16 16 32]
[ 8 8 16]
[ 4 4 8]]
anchor scales: {'z': [[2], [4], [8], [16]], 'xy': [[8], [16], [32], [64]]}
level 0: built anchors (196608, 6) / expected anchors 196608 ||| total build (196608, 6) / total expected 224640
level 1: built anchors (24576, 6) / expected anchors 24576 ||| total build (221184, 6) / total expected 224640
level 2: built anchors (3072, 6) / expected anchors 3072 ||| total build (224256, 6) / total expected 224640
level 3: built anchors (384, 6) / expected anchors 384 ||| total build (224640, 6) / total expected 224640
using default pytorch weight init
subset: selected 57 instances from df
data set loaded with: 57 test patients
tmp ensembling over rank_ix:0 epoch:./brats_tc/fold_0/114_best_params.pth
Traceback (most recent call last):
File "exec.py", line 190, in
test(logger)
File "exec.py", line 126, in test
test_results_list = test_predictor.predict_test_set(batch_gen, return_results=True)
File "/data2/project/keras/medicaldetectiontoolkit/predictor.py", line 149, in predict_test_set
batch = next(batch_gen['test'])
File "/data2/project/keras/batchgenerators/batchgenerators/dataloading/data_loader.py", line 123, in next
return self.generate_train_batch()
File "experiments/brats_tc/data_loader.py", line 381, in generate_train_batch
patch_crop_coords_list = dutils.get_patch_crop_coords(data, self.patch_size)
File "/data2/project/keras/medicaldetectiontoolkit/utils/dataloader_utils.py", line 157, in get_patch_crop_coords
n_patches = int(np.ceil(img.shape[dim] / patch_size[dim]))
IndexError: list index out of range

Thank you,
David.

10 undefined names

flake8 testing of https://github.com/pfjaeger/medicaldetectiontoolkit on Python 3.7.1

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./models/retina_unet.py:255:75: F821 undefined name 'b_keep'
            b_keep = class_keep if i == 0 else mutils.unique1d(torch.cat((b_keep, class_keep)))
                                                                          ^
./models/retina_unet.py:261:71: F821 undefined name 'batch_keep'
        batch_keep = b_keep if j == 0 else mutils.unique1d(torch.cat((batch_keep, b_keep)))
                                                                      ^
./models/retina_net.py:255:75: F821 undefined name 'b_keep'
            b_keep = class_keep if i == 0 else mutils.unique1d(torch.cat((b_keep, class_keep)))
                                                                          ^
./models/retina_net.py:261:71: F821 undefined name 'batch_keep'
        batch_keep = b_keep if j == 0 else mutils.unique1d(torch.cat((batch_keep, b_keep)))
                                                                      ^
./models/ufrcnn.py:668:79: F821 undefined name 'b_keep'
                b_keep = class_keep if i == 0 else mutils.unique1d(torch.cat((b_keep, class_keep)))
                                                                              ^
./models/ufrcnn.py:675:75: F821 undefined name 'batch_keep'
            batch_keep = b_keep if j == 0 else mutils.unique1d(torch.cat((batch_keep, b_keep)))
                                                                          ^
./models/backbone.py:243:40: F821 undefined name 'c1_out'
            p1_pre_out = self.P1_conv1(c1_out) + self.P2_upsample(p2_pre_out)
                                       ^
./models/backbone.py:244:40: F821 undefined name 'c0_out'
            p0_pre_out = self.P0_conv1(c0_out) + self.P1_upsample(p1_pre_out)
                                       ^
./models/mrcnn.py:694:79: F821 undefined name 'b_keep'
                b_keep = class_keep if i == 0 else mutils.unique1d(torch.cat((b_keep, class_keep)))
                                                                              ^
./models/mrcnn.py:701:75: F821 undefined name 'batch_keep'
            batch_keep = b_keep if j == 0 else mutils.unique1d(torch.cat((batch_keep, b_keep)))
                                                                          ^
4     F821 undefined name 'b_keep'
4     F821 undefined name 'batch_keep'
1     F821 undefined name 'c0_out'
1     F821 undefined name 'c1_out'
10
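
All the b_keep/batch_keep hits follow the same pattern: a variable is first bound inside an `if i == 0` branch and then read in the matching `else`, which static analysis cannot prove is safe. A hedged sketch of an equivalent idiom that avoids the warning (an illustration only, not the repo's code): collect the per-iteration tensors in a list and concatenate once after the loop.

```python
import torch

def unique_cat(chunks):
    """Concatenate 1-D index tensors and deduplicate, without the
    'define-on-first-iteration' pattern that triggers F821."""
    kept = []
    for class_keep in chunks:
        kept.append(class_keep)
    return torch.unique(torch.cat(kept)) if kept else torch.empty(0, dtype=torch.long)

print(unique_cat([torch.tensor([1, 2]), torch.tensor([2, 3])]))  # tensor([1, 2, 3])
```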

HELP!

Hi,
Something went wrong and confused me when I re-compiled the CUDA functions.
`(venv) xliu@310-aa:~/medicaldetectiontoolkit/cuda_functions/nms_2D$ python build.py
Including CUDA code.
/home/xliu/medicaldetectiontoolkit/cuda_functions/nms_2D
generating /tmp/tmpaxsbcnrt/_nms.c
setting the current directory to '/tmp/tmpaxsbcnrt'
running build_ext
building '_nms' extension
creating home
creating home/xliu
creating home/xliu/medicaldetectiontoolkit
creating home/xliu/medicaldetectiontoolkit/cuda_functions
creating home/xliu/medicaldetectiontoolkit/cuda_functions/nms_2D
creating home/xliu/medicaldetectiontoolkit/cuda_functions/nms_2D/src
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include -I/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/include/python3.5m -I/home/xliu/medicaldetectiontoolkit/venv/include/python3.5m -c _nms.c -o ./_nms.o -std=c99
In file included from /home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC/THC.h:4:0,
from _nms.c:493:
/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC/THCGeneral.h:12:18: fatal error: cuda.h: No such file or directory
compilation terminated.

Traceback (most recent call last):
File "/usr/lib/python3.5/distutils/unixccompiler.py", line 118, in _compile
extra_postargs)
File "/usr/lib/python3.5/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/usr/lib/python3.5/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/usr/lib/python3.5/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'x86_64-linux-gnu-gcc' failed with exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/cffi/ffiplatform.py", line 51, in _build
dist.run_command('build_ext')
File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.5/distutils/command/build_ext.py", line 338, in run
self.build_extensions()
File "/usr/lib/python3.5/distutils/command/build_ext.py", line 447, in build_extensions
self._build_extensions_serial()
File "/usr/lib/python3.5/distutils/command/build_ext.py", line 472, in _build_extensions_serial
self.build_extension(ext)
File "/usr/lib/python3.5/distutils/command/build_ext.py", line 532, in build_extension
depends=ext.depends)
File "/usr/lib/python3.5/distutils/ccompiler.py", line 574, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/usr/lib/python3.5/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command 'x86_64-linux-gnu-gcc' failed with exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "build.py", line 34, in <module>
ffi.build()
File "/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/torch/utils/ffi/__init__.py", line 189, in build
_build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
File "/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/torch/utils/ffi/__init__.py", line 111, in _build_extension
outfile = ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
File "/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/cffi/api.py", line 697, in compile
compiler_verbose=verbose, debug=debug, **kwds)
File "/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/cffi/recompiler.py", line 1520, in recompile
compiler_verbose, debug)
File "/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/cffi/ffiplatform.py", line 22, in compile
outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
File "/home/xliu/medicaldetectiontoolkit/venv/lib/python3.5/site-packages/cffi/ffiplatform.py", line 58, in _build
raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.error.VerificationError: CompileError: command 'x86_64-linux-gnu-gcc' failed with exit status 1
`

I have problems running Mask R-CNN with a 4-channel dataset

Hi, pfjaeger.
I'm trying to run Mask R-CNN with the BraTS dataset.
I created a 4-channel .npy dataset from BraTS (t1, t1ce, t2, flair).
When I run training, I get an error like this:
assert len(crop_size) == len(data_shape) - 2, "If you provide a list/tuple as center crop make sure it has the same dimension as your data (2d/3d)"
AssertionError: If you provide a list/tuple as center crop make sure it has the same dimension as your data (2d/3d).

Before training, I modified data_loader.py to read multi-channel data:
data = np.transpose(np.load(patient['data'], mmap_mode='r'), axes=(0, 1, 2, 3))[np.newaxis]
seg = np.transpose(np.load(patient['seg'], mmap_mode='r'), axes=(0, 1, 2, 3))

Could you comment on how to resolve this error?
Best,
David.
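
A minimal sketch of what the assertion checks, assuming the batchgenerators center-crop convention of (b, c, spatial...) arrays: crop_size must have exactly data.ndim - 2 entries, so a 5-D BraTS batch of shape (b, 4, 240, 240, 155) needs a 3-element crop/patch size. A 2-element crop size, or a data array that lost or gained an axis in the transpose above, trips exactly this assertion. The helper below only mirrors the quoted check, it is not the library code.

```python
import numpy as np

def check_crop(data, crop_size):
    # mirrors the batchgenerators-style check quoted in the error message
    assert len(crop_size) == data.ndim - 2, (
        "If you provide a list/tuple as center crop make sure it has the same "
        "dimension as your data (2d/3d)")
    return True

batch = np.zeros((2, 4, 240, 240, 155), dtype=np.float32)  # (b, c, x, y, z)

check_crop(batch, (160, 160, 96))   # passes: 3 spatial entries for 3-D data
try:
    check_crop(batch, (160, 160))   # fails: only 2 entries -> the reported error
except AssertionError as e:
    print("AssertionError:", e)
```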

dataset problem

Hi,
I am working on a project for fully automated quantitative cephalometry in X-ray images. The images are 2D.
I would like to know which type of annotation file I need for training.
I have a CSV file with the coordinates of all the bounding boxes in each image. Is that OK for your scripts?

Thank you so much
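
Not an official answer, but a hedged sketch of one way to bridge a box-only CSV to the mask-style per-image ROI files the data loaders in this repo tend to expect: rasterise each bounding box into an integer ROI map per image. The CSV column names and the image size below are hypothetical; adjust them to your data.

```python
import numpy as np
import pandas as pd

# hypothetical CSV layout: image_id, x1, y1, x2, y2, class_id
df = pd.read_csv("annotations.csv")

def boxes_to_roi_map(boxes, height, width):
    """Rasterise (x1, y1, x2, y2) boxes into an integer ROI map (0 = background)."""
    roi_map = np.zeros((height, width), dtype=np.uint8)
    for roi_id, (x1, y1, x2, y2) in enumerate(boxes, start=1):
        roi_map[int(y1):int(y2), int(x1):int(x2)] = roi_id
    return roi_map

for image_id, group in df.groupby("image_id"):
    boxes = group[["x1", "y1", "x2", "y2"]].values
    roi_map = boxes_to_roi_map(boxes, height=1024, width=1024)  # hypothetical image size
    np.save(f"{image_id}_rois.npy", roi_map)
```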

Memory Error when preprocessing LIDC file

Hi Sir Paul, thank you for your marvelous work

I have already converted the LIDC dataset to nrrd/nifti files. When I ran preprocessing.py on them for the first time, I got this error:

warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
/home/ivanwilliam/.virtualenvs/virtual-py3/lib/python3.5/site-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.5/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "preprocessing.py", line 61, in pp_patient
img_arr = resample_array(img_arr, img.GetSpacing(), cf.target_spacing)
File "preprocessing.py", line 48, in resample_array
img = src_imgs.astype(float)
MemoryError
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "preprocessing.py", line 137, in
p1 = pool.map(pp_patient, enumerate(paths), chunksize=1)
File "/usr/lib/python3.5/multiprocessing/pool.py", line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib/python3.5/multiprocessing/pool.py", line 608, in get
raise self._value
MemoryError

Thanks in advance Sir.

Note: the command I ran is
python preprocessing.py, with the directory settings changed in configs.py.

My computer specs: Intel i7-7700, 16 GB RAM, NVIDIA GTX 1050 Ti 4 GB.
I have already downgraded NumPy to 1.14.5, following the recommendation in the https://github.com/MIC-DKFZ/batchgenerators README.md.
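
A hedged workaround sketch, not the repo's code: the traceback fails at the float64 cast inside the resampling step, and with 16 GB of RAM that cast plus several worker processes can exhaust memory. Casting to float32 and/or dropping the pool to one or two workers is the usual first attempt; the function and variable names below are illustrative stand-ins for the ones in preprocessing.py.

```python
import numpy as np
from multiprocessing import Pool
from skimage.transform import resize

def resample_array_float32(src_img, src_spacing, target_spacing):
    # cast to float32 instead of float64 to roughly halve peak memory
    img = src_img.astype(np.float32)
    new_shape = [int(img.shape[i] * src_spacing[i] / target_spacing[i])
                 for i in range(img.ndim)]
    return resize(img, new_shape, order=1, preserve_range=True).astype(np.float32)

def process_patient(path):
    ...  # load, resample, and save one patient at a time

if __name__ == "__main__":
    paths = []  # list of patient files, as built by preprocessing.py
    with Pool(processes=2) as pool:  # fewer workers -> lower peak memory
        pool.map(process_patient, paths, chunksize=1)
```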

visualize problem

I have obtained results when running with args.mode == 'analysis'. I would like to visualize these results; how can I do that?
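
A hedged plotting sketch, assuming you can extract per-patient predictions as a list of box dictionaries with coordinates in (y1, x1, y2, x2) order plus a 2-D image slice; the dictionary key names here are assumptions, so adapt them to whatever your results pickle actually contains.

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

def show_boxes(img_slice, boxes, score_thresh=0.3):
    fig, ax = plt.subplots()
    ax.imshow(img_slice, cmap="gray")
    for b in boxes:
        if b.get("box_score", 1.0) < score_thresh:
            continue
        y1, x1, y2, x2 = b["box_coords"]
        ax.add_patch(mpatches.Rectangle((x1, y1), x2 - x1, y2 - y1,
                                        fill=False, edgecolor="red", linewidth=1.5))
    plt.show()

# toy usage with fake data
show_boxes(np.random.rand(128, 128),
           [{"box_coords": (30, 40, 80, 90), "box_score": 0.8}])
```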

raise ValueError("MirrorTransform now takes the axes as the spatial dimensions. What previously was " ValueError: MirrorTransform now takes the axes as the spatial dimensions. What previously was axes=(2, 3, 4) to mirror along all spatial dimensions of a 5d tensor (b, c, x, y, z) is now axes=(0, 1, 2). Please adapt your scripts accordingly.

Traceback (most recent call last):
File "/home/wgs/medicaldetectiontoolkit/exec.py", line 167, in
train(logger)
File "/home/wgs/medicaldetectiontoolkit/exec.py", line 53, in train
batch_gen = data_loader.get_train_generators(cf, logger)
File "experiments/lidc_exp/data_loader.py", line 76, in get_train_generators
batch_gen['train'] = create_data_gen_pipeline(train_data, cf=cf, is_training=True)
File "experiments/lidc_exp/data_loader.py", line 181, in create_data_gen_pipeline
mirror_transform = Mirror(axes=np.arange(2, cf.dim+2, 1))
File "/home/wgs/batchgenerators/batchgenerators/transforms/spatial_transforms.py", line 192, in init
raise ValueError("MirrorTransform now takes the axes as the spatial dimensions. What previously was "
ValueError: MirrorTransform now takes the axes as the spatial dimensions. What previously was axes=(2, 3, 4) to mirror along all spatial dimensions of a 5d tensor (b, c, x, y, z) is now axes=(0, 1, 2). Please adapt your scripts accordingly.
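
The newer batchgenerators API counts only the spatial axes, so the change suggested by the error message is to drop the batch/channel offset when building the mirror transform in experiments/lidc_exp/data_loader.py. A sketch of the one-line change, with `dim` standing in for cf.dim from the experiment config:

```python
import numpy as np
from batchgenerators.transforms.spatial_transforms import MirrorTransform as Mirror

dim = 3  # stand-in for cf.dim in the experiment config

# old call (pre-API change), which raises the ValueError above:
# mirror_transform = Mirror(axes=np.arange(2, dim + 2, 1))

# new call: axes now index the spatial dimensions directly, (0, 1) in 2D, (0, 1, 2) in 3D
mirror_transform = Mirror(axes=np.arange(dim))
```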
