nibtehaz / multiresunet

MultiResUNet : Rethinking the U-Net architecture for multimodal biomedical image segmentation

License: MIT License

Languages: Python 21.19%, Jupyter Notebook 78.81%
Topics: medical-imaging, segmentation-models

multiresunet's Introduction

Hi there 👋!

About Me

  • I'm currently a CS PhD Student at Purdue University.
  • I received my BSc (EEE) and MSc (CSE) degrees from Bangladesh University of Engineering and Technology.
  • My interests include Computer Vision, Biomedical Image and Signal Processing, and Bioinformatics.
  • Feel free to contact me: [email protected]

🛠  Tech Stack

Python  JavaScript  Java  C  C++  R (Statistics)
React  Node.js  Django  Flask  Bootstrap
HTML  CSS  Git  GitHub  Markdown
Visual Studio Code  RStudio  Eclipse


multiresunet's People

Contributors

nibtehaz


multiresunet's Issues

Validation quality metrics "get stuck"

With certain splits of the data into training, validation, and test sets, I consistently observe strange behaviour that does not occur with other splits of the same overall dataset: the validation metrics stand still from the beginning of training, at enormously bad values, while the training metrics keep improving. Unfortunately, it is not really possible to reconstruct which combination of images triggers this behaviour, but it probably depends on a combination rather than on individual images. Is such behaviour known, or is there even a solution for it?

Estimated 76.0 GB model memory usage
Train shape: (1166, 672, 672, 1); Val shape: (292, 672, 672, 1); Test shape: (59, 672, 672, 1)
  0%|| 2/15000 [04:11<523:59:44, 125.78s/epoch, val_loss=12.7, val_jaccard_round=0.171, loss=0.43, jaccard_round=0.748]

Cannot read saved weights

I really love this project, but now I have run into my first problem. I used your function to save the model:

import os

def saveModel(model):
    # serialize the architecture to JSON
    model_json = model.to_json()
    os.makedirs('models', exist_ok=True)
    # 'with' ensures the JSON file is flushed and closed
    with open('models/modelP.json', 'w') as fp:
        fp.write(model_json)
    # the weights go to a separate HDF5 file
    model.save_weights('models/modelW.h5')

And I tried to read it like this:

from keras.models import model_from_json

def load_model(model_dir):
    # load the JSON architecture and rebuild the model
    with open(model_dir + 'modelP.json', 'r') as json_file:
        loaded_model_json = json_file.read()
    loaded_model = model_from_json(loaded_model_json)
    # load the trained weights into the rebuilt model
    loaded_model.load_weights(model_dir + 'modelW.h5')
    loaded_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    print("Loaded model from " + model_dir)
    return loaded_model

But this is the error message I get when I call load_weights, at least for some of the files:

  File "/home/x/anaconda3/envs/MultiResUNet/lib/python3.7/site-packages/h5py/_hl/files.py", line 312, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/home/x/anaconda3/envs/MultiResUNet/lib/python3.7/site-packages/h5py/_hl/files.py", line 142, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (bad object header version number)

Any chance I can recover my hard-trained models?
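
One hedged way to check whether a weights file is intact (an editor's sketch, not from the repository) is to open it directly with h5py; a "bad object header" error usually means the file was truncated or corrupted while being written, in which case the weights are generally not recoverable:

import h5py

# probe the HDF5 file: an intact file lists its weight groups
try:
    with h5py.File('models/modelW.h5', 'r') as f:
        print(list(f.keys()))
except OSError as e:
    print('File is corrupted or truncated:', e)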

About the network

out = Activation('relu')(out)

Hi,
I have some questions about the network:

  1. Generally, a BN layer is followed by an activation, as in the official ResNet implementation (see the sketch after this list for the conventional ordering); I would like to know your thinking here. Thanks.
  2. Why do you place another activation and BN layer after every feature fusion (add/concat)? Is this based on experimental results?
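
For reference, a minimal Keras sketch (an editor's illustration, not the repository's code) of the conventional conv -> BN -> ReLU ordering the first question refers to:

from keras.layers import Conv2D, BatchNormalization, Activation

def conv_bn_relu(x, filters):
    # conventional ResNet-style ordering: convolution, then BN, then activation
    x = Conv2D(filters, (3, 3), padding='same', use_bias=False)(x)
    x = BatchNormalization(axis=3)(x)
    return Activation('relu')(x)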

Code

Is there more complete code (such as the training process and dataset)?

First convolution layers in MultiResUBlock and ResPath don't use any activations

Hi there,

I'm about to experiment with a modified version of your architecture in my master's thesis. I saw in your code that the first convolution layers in MultiResUBlock and ResPath don't use any activation function. What was the reason you decided not to use an activation function here?

shortcut = conv2d_bn(shortcut, filters, 1, 1,activation=None, padding='same')

Cheers,
H

Demo for the ISBI 2012 challenge dataset

In your paper, it appears that you tested this implementation on the ISBI 2012 challenge dataset (electron microscopy images).
Do you have a sample demo notebook you could share?

ResourceExhaustedError MultiResUNet3D

When I try to train the MultiResUNet3D model with input shape (128, 128, 128, 1) and batch size 1, Keras raises this exception:

ResourceExhaustedError: OOM when allocating tensor with shape[1,128,128,128,32] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node training/Adadelta/gradients/zeros_70}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[{{node loss/mul}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Also, I used the following function to estimate the GPU memory that Keras needs:

# function taken from:
# https://stackoverflow.com/questions/43137288/how-to-determine-needed-memory-of-keras-model

import numpy as np
from keras import backend as K

def get_model_memory_usage(batch_size, model):
    # sum the number of activation values held for one sample
    shapes_mem_count = 0
    for l in model.layers:
        single_layer_mem = 1
        for s in l.output_shape:
            if s is None:
                continue
            single_layer_mem *= s
        shapes_mem_count += single_layer_mem
    trainable_count = np.sum([K.count_params(p) for p in set(model.trainable_weights)])
    non_trainable_count = np.sum([K.count_params(p) for p in set(model.non_trainable_weights)])
    # bytes per number, depending on the float precision in use
    number_size = 4.0
    if K.floatx() == 'float16':
        number_size = 2.0
    if K.floatx() == 'float64':
        number_size = 8.0
    total_memory = number_size * (batch_size * shapes_mem_count + trainable_count + non_trainable_count)
    gbytes = np.round(total_memory / (1024.0 ** 3), 3)
    return gbytes

And this is the output when I execute the following code:

# where "model3D" is the MultiResUNet3D model.
get_model_memory_usage(1, model3D) # output 21.628 "GB"

# where "model2D" is the MultiResUNet3D model.
get_model_memory_usage(1, model2D) # output 0.256 "GB"

it is normal that the model does not run?
How much GPU memory do I need?

My GPU is a GTX 1080 Ti (11 GB).
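
One hedged workaround (an editor's sketch; the constructor signature is assumed from the repository's MultiResUNet3D.py, so verify it against the file) is to train on smaller sub-volumes so the activations fit into 11 GB:

# assuming MultiResUnet3D(length, width, height, n_channels) as in the repo;
# 64^3 patches instead of 128^3 cut activation memory roughly eightfold
model3D_small = MultiResUnet3D(64, 64, 64, 1)
print(get_model_memory_usage(1, model3D_small))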

PyTorch version of the network

Hello, can you provide a PyTorch version of the network implementation? I tried to reproduce your MultiResUnet code, but problems keep occurring. Thank you.

Is multiclass segmentation possible?

Hello,

I want to apply your model to a cell segmentation task. Since I have to segment my images into 4 classes (background, healthy cells, cancer cells, and leucocytes), I wanted to ask whether your model can easily be modified to support multiclass segmentation.
I have little experience with deep learning and Python, so help would be much appreciated!
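
A minimal sketch of the usual modification (an editor's suggestion, not the authors' code), assuming conv2d_bn, mresblock9, and inputs from the repository's MultiResUNet.py: give the output one channel per class with a softmax, and train with a categorical loss on one-hot masks:

from keras.models import Model

# 4 output channels, one per class, with a mutually exclusive softmax
conv10 = conv2d_bn(mresblock9, 4, 1, 1, activation='softmax')
model = Model(inputs=[inputs], outputs=[conv10])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# the masks must then be one-hot encoded to shape (height, width, 4)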

Validation / training scores mismatch

Hi,

I have run your network, based on the notebook, in a project of mine. However, I pondered quite a bit over my validation Jaccard scores outperforming the training scores by a large margin. I suspect the answer lies in the rounding of yp that you perform in evaluateModel; from what I can tell, this rounding is not done in the metric used during training. After removing this rounding, the scores matched as expected.

Please let me know if I'm missing the point somewhere, or if you agree with the observation.

Thanks for a superb piece of work!

Arild
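
For reference, a hedged sketch (an editor's illustration, not from the repository) of a Jaccard metric that rounds the predictions the way evaluateModel does, so training-time and evaluation-time numbers agree; rounding is not differentiable, so this is usable only as a metric, not as a loss:

from keras import backend as K

def jaccard_rounded(y_true, y_pred, smooth=1e-7):
    y_pred = K.round(y_pred)  # threshold at 0.5, matching evaluateModel
    intersection = K.sum(y_true * y_pred)
    union = K.sum(y_true) + K.sum(y_pred) - intersection
    return (intersection + smooth) / (union + smooth)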

About the pytorch version of MultiResUNet3D.py

Hello.

  1. Did you run the PyTorch version of MultiResUNet3D.py through the PyTorch framework?
  2. Have the experimental results given in the paper "MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation" been reproduced with it?

About the ISIC-17 Dataset download link in the demo

Hi! It is an honor to read your paper "MultiResUNet : Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation", and I now want to reproduce the code corresponding to the paper. Unfortunately, the ISIC-17 Dataset download link provided in your demo doesn't seem to work. How should I solve this problem?
Words fail me when I wish to express my sincere gratitude to you. Looking forward to your early reply!

ValueError: cannot reshape array of size 106168320 into shape

Hi, I'm trying to train a 2D MultiResUNet model with my own data, consisting of 300 images and 300 masks.
But I get the error in the preprocessing (reshape) part, as in the attached images. I think it is caused by the Y_test and Y_train handling and the differences in the dataset used. I'd like to know how I can modify the code to get it running correctly.
Thanks a lot.

(Two screenshots of the reshape error were attached.)
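
A hedged preprocessing sketch (an editor's illustration; the 256x256 target size, the grayscale assumption, and the images list are illustrative, not from the repository): resizing every image and mask to one fixed size before reshaping avoids size mismatches like the one above:

import cv2
import numpy as np

# resize each loaded image to a fixed size, then add the channel axis
X = np.array([cv2.resize(img, (256, 256)) for img in images], dtype=np.float32) / 255.0
X = X.reshape(-1, 256, 256, 1)  # valid only for single-channel (grayscale) data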

Train on my own dataset

Hi!
First of all, thank you very much for sharing your project. It's a great job, congratulations!

I would like to train your network on my own dataset. I have .nii files, so should I stack my whole dataset into a NumPy array?
For example, if I have 20 .nii files, each of shape [20, 256, 256], do I create an array of shape [400, 256, 256]? Is that it?

Thank you very much in advance.

All the best :)
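
Stacking along the slice axis, as described above, might look like this hedged sketch (an editor's illustration, assuming nibabel is installed and nii_paths lists the 20 files):

import nibabel as nib
import numpy as np

volumes = [nib.load(p).get_fdata() for p in nii_paths]  # each volume: (20, 256, 256)
X = np.concatenate(volumes, axis=0)                     # (400, 256, 256)
X = X[..., np.newaxis]                                  # (400, 256, 256, 1) for the network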

Dice and Jaccard score not improving

Hi @nibtehaz, I've been trying your model (with the same settings) on my image segmentation project, using 365 images split 0.2 into 292 for training and 73 for testing, and I get fairly good Dice and Jaccard scores. However, the Jaccard score stays at around 0.7 for many epochs (20-30) and stops improving even after 100 epochs. Among the 73 test images, the Jaccard score reaches 0.91 for some predictions but only 0.65 for others, which averages out to around 0.7.
I know it is probably a dataset problem. Is there any way to improve the overall Jaccard score with the same dataset? Should I just keep training, or make changes to the dataset?

Use of trans_conv2d_bn

Hi,

Thank you very much for your work. I'm trying to use MultiResUNet as the generator of a GAN. I would like to know whether the definition of trans_conv2d_bn has any purpose, since it is not used. Also, why did you decide not to use dropout?

Thanks!

About pytorch version

Hi! It is a great honor to read your research paper "MultiResUNet : Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation". Now I want to reproduce the code corresponding to the paper. Can you provide a PyTorch version of the code?

/bin/sh: 1: unzip: not found

When I run the Demo notebook, I get this error in the code block that extracts the zip files.

Does anyone know how to fix this? Thanks!

(P.S. I am a beginner in Python.)
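
One hedged workaround (an editor's suggestion) is to install unzip (e.g. apt-get install unzip), or to extract with Python's built-in zipfile module instead; the filename here is illustrative:

import zipfile

with zipfile.ZipFile('ISIC-2017_Training_Data.zip') as zf:  # illustrative filename
    zf.extractall('data')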

Generalization problem

Have you tried datasets with separate validation and test sets? I get good results on the training set but bad results on the validation set for ISIC 2016 and ISIC 2017.

How can we run predictions with the model?

Dear Author,
I did not find any data loader or code for organizing a dataset. How do we train the model? Moreover, how can we test the performance of the model?
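
For inference, a hedged sketch (an editor's illustration, not from the repository), assuming X_test has been preprocessed to (N, height, width, channels) with values in [0, 1]:

# predict sigmoid probabilities, then binarize at 0.5
Y_pred = model.predict(X_test, batch_size=1)
Y_pred = (Y_pred > 0.5).astype('uint8')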

ValueError: logits and labels must have the same shape ((None, 192, 256, 3) vs (None, 192, 256, 1))

I am trying to use this architecture for multiclass segmentation by replacing conv10 = conv2d_bn(mresblock9, 1, 1, 1, activation='sigmoid') with conv10 = conv2d_bn(mresblock9, 3, 1, 1, activation='sigmoid'), with a dataset like the one attached.
But I get this error: ValueError: Input 0 of layer conv2d_115 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape (None, 192, 256, 3)
My questions are:

  1. Is it enough to just change the 1 to a 3 in conv10 = conv2d_bn(mresblock9, 1, 1, 1, activation='sigmoid')?
  2. Should the activation function still be sigmoid?
  3. Should the loss function still be binary_crossentropy?

Thanks.
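
The shape mismatch in the title arises because the labels are still single-channel while the output now has 3 channels. A hedged sketch of a matching setup (an editor's suggestion, not the authors' code, assuming integer class maps of shape (N, 192, 256, 1)):

from keras.utils import to_categorical

# softmax rather than sigmoid for mutually exclusive classes,
# and a categorical loss to go with it
conv10 = conv2d_bn(mresblock9, 3, 1, 1, activation='softmax')
model.compile(optimizer='adam', loss='categorical_crossentropy')
Y_train = to_categorical(Y_train[..., 0], num_classes=3)  # (N, 192, 256, 3) to match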

Jaccard index and Dice coefficient remain zero throughout

Hello,
I have tried the MultiRes U-Net architecture on a different dataset. My accuracy and loss behave well; however, the Jaccard index and Dice coefficient remain zero throughout. Could you please let me know why this might happen, and any possible suggestions?
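
One common cause (an editor's suggestion, not confirmed by the authors) is masks stored as 0/255 rather than 0/1, which drives overlap-based metrics toward zero while accuracy and loss still look reasonable. A hedged sanity check:

import numpy as np

print(np.unique(Y_train))  # expect [0. 1.]; seeing [0. 255.] indicates unscaled masks
Y_train = (Y_train > 0).astype('float32')  # rescale to {0, 1}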

How to implement 5-fold cross validation

Hello Nabil,

I checked your demo code and want to implement 5-fold cross-validation in it, but I have not found any help anywhere.

Could you please share how I can implement 5-fold cross-validation with this code?

Thanks
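
A hedged sketch (an editor's illustration, not from the repository) using scikit-learn's KFold over the full preprocessed arrays X and Y; the constructor arguments are assumed from the demo notebook, so verify them against it:

from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    # build a fresh model for every fold so no weights leak between folds
    model = MultiResUnet(height=192, width=256, n_channels=3)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(X[train_idx], Y[train_idx],
              validation_data=(X[val_idx], Y[val_idx]),
              epochs=50, batch_size=8)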

Bad Nuclei Segmentation performance

Dear Authors of MultiResUNet,
Greetings, and thank you for your useful contribution to the research community.

I am re-implementing your model to generate results to compare with my model, for publication in a reputable journal.
On the other three datasets your model performs as well as reported in the paper, but on this nuclei segmentation dataset the IoU does not increase beyond 40%.
Dataset link: https://www.kaggle.com/gangadhar/nuclei-segmentation-in-microscope-cell-images

I believe you use the same dataset in your paper. I am using the same loss functions and learning rates as in your GitHub code and trained for 20 to 80 epochs, but I still could not produce on-par results.
I have to report the result in my paper; could you please help me with this issue?
Thank You,
Regards,

MultiResBlock Architecture Explanation

Hi, I wanted to ask about a detail of the MultiResBlock architecture. In Figure 3(a) you concatenate the generated feature maps. How can you concatenate feature maps of different sizes? As shown in Figure 3(a), the layers use 3x3, 5x5, and 7x7 kernels, so each block has a different kernel size. Could you provide a more detailed explanation of Figure 3(a)? Thank you.

(Figure 3(a) from the paper was attached.)
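
For what it's worth, a hedged sketch (an editor's illustration based on the repository's MultiResBlock; the filter counts are illustrative) of why the concatenation works: the 5x5 and 7x7 paths are approximated by chaining 3x3 convolutions with 'same' padding, so all three outputs keep the same spatial size and differ only in channel count, which Concatenate joins along the channel axis:

from keras.layers import Input, Conv2D, Concatenate

x = Input((256, 256, 3))
a = Conv2D(8, (3, 3), padding='same', activation='relu')(x)   # effective 3x3 path
b = Conv2D(17, (3, 3), padding='same', activation='relu')(a)  # effective 5x5 path (two stacked 3x3)
c = Conv2D(26, (3, 3), padding='same', activation='relu')(b)  # effective 7x7 path (three stacked 3x3)
out = Concatenate(axis=3)([a, b, c])  # spatial sizes match; channels become 8+17+26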
