
tgd15 / region-specific-u-nets-rectal-cancer

U-Nets to annotate post-chemoradiotherapy (CRT) imaging

License: BSD 3-Clause "New" or "Revised" License

Python 41.64% MATLAB 58.36%

region-specific-u-nets-rectal-cancer's Introduction

U-Nets for Post-Treatment Rectal Cancer Imaging

Overview

Two deep learning U-Nets were developed to segment the outer rectal wall and the lumen on post-treatment rectal cancer MRI. The resulting segmentations can then be viewed in 3D Slicer. You can clone this repository and run the U-Nets in under five minutes.

Table of Contents

  • Dependencies and Installation
  • Running the Code
  • Visualizing the Results
  • Troubleshooting

Dependencies and Installation

Dependencies

The following dependencies are required to run the U-Nets:

- Python 3.7.7
- Tensorflow 1.15.0 or higher
- Keras 2.2.4 or higher
- SimpleITK 1.2.4 or higher
- opencv 3.4.2
- Pillow 7.1.2 or higher
- h5py 2.8.0 or higher
- matplotlib 3.2.2 or higher
- numpy 1.18.1 or higher

Note: running the U-Nets on newer versions of these packages has not been tested. If you are using newer versions of the packages listed above, please submit a pull request indicating whether the code runs or breaks for each dependency.

Installation

To clone and run this application, you'll need Git installed on your computer. From your command line:

# Clone this repository (assuming you are using the HTTPS protocol)
git clone https://github.com/tgd15/Post-Treatment_Unet.git

# Go into the repository
cd Post-Treatment_Unet

If you have Conda installed, create a Conda environment from the environment.yml file in the repository root and activate it.

  • On Windows:
conda env create --file environment.yml
conda activate postcrt
  • On MacOS/Linux:
conda env create --file environment.yml
source activate postcrt

If you don't use Conda, make sure your environment includes the packages listed in the Dependencies section above.

Running the Code

Once you have everything installed, run the code in the postcrt conda environment by executing run_Unet.py from your command line. run_Unet.py requires two arguments:

  1. --m, the U-Net you want to run, specified as a string. Allowed choices are Outer_Rectal_Wall or Lumen.
  2. --i, the file path to the .mha file you want to annotate, specified as a string.

Here are examples demonstrating how to run the U-Nets:

Running Outer Rectal Wall U-Net:

python run_Unet.py --m "Outer_Rectal_Wall" --i "/GoogleDrive/Shared drives/INVent_Data/Rectal/newdata/UH/RectalCA145-2/RectalCA145-2_Post_Ax.mha"

Running Lumen U-Net:

python run_Unet.py --m "Lumen" --i "/GoogleDrive/Shared drives/INVent_Data/Rectal/newdata/UH/RectalCA145-2/RectalCA145-2_Post_Ax.mha"

Once the code finishes running, a .mha file with prediction_label appended to the end of the filename will be saved in the current directory. This .mha file contains the segmentations generated by the U-Net.

Visualizing the Results

The .mha file ending in prediction_label is designed to be viewed in 3D Slicer. Simply drag the file from your file explorer/Finder into 3D Slicer and overlay it on the MRI.

In 3D Slicer, segmentations generated by the outer rectal wall U-Net will have label 8, and segmentations generated by the lumen U-Net will have label 2.
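
If you want to sanity-check the labels before opening 3D Slicer, you can load the prediction file with SimpleITK (already in the dependency list) and print its unique voxel values. The filename below is a hypothetical example:

import SimpleITK as sitk
import numpy as np

# Load a prediction file produced by run_Unet.py (example filename)
pred = sitk.ReadImage("RectalCA145-2_Post_Ax_ORW_prediction_label.mha")
pred_np = sitk.GetArrayFromImage(pred)

# Expect {0, 8} for the outer rectal wall U-Net and {0, 2} for the lumen U-Net
print(np.unique(pred_np))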

Troubleshooting

If you have questions, open an issue on the repository.

If you find a bug, please fix it and submit a pull request.


INVent Lab Logo

region-specific-u-nets-rectal-cancer's People

Contributors: tgd15

region-specific-u-nets-rectal-cancer's Issues

Specify interpolation type

When resizing images, the interpolation is always linear. This does not work for masks.

When resizing masks, the interpolation should be nearest neighbor. Update the resize_img function in resize_img.py to use nearest neighbor interpolation for masks.
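
A minimal sketch of what the fix could look like, assuming resize_img.py resizes a (height, width, slices) volume slice by slice with OpenCV; the actual implementation is not shown in this issue, and only the nn flag matches the nn=True call in the corrected run_Unet.py in the bug issue below:

import cv2
import numpy as np

def resize_img(volume, new_size, nn=False):
    # new_size is (width, height); use nearest-neighbour interpolation for
    # masks (nn=True) so label values are not blurred, linear otherwise
    interp = cv2.INTER_NEAREST if nn else cv2.INTER_LINEAR
    resized = np.zeros((new_size[1], new_size[0], volume.shape[2]), dtype=volume.dtype)
    for k in range(volume.shape[2]):
        resized[:, :, k] = cv2.resize(volume[:, :, k], tuple(new_size), interpolation=interp)
    return resized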

Export U-Net Segmentations as .mha file

Once the segmentations are thresholded properly, export the U-Net segmentations as .mha files. The .mha files can then be viewed in 3D Slicer.

Update segmentation labels

Once the segmentations for a patient are thresholded, the labels of the segmentation masks need to be changed based on which U-Net is loaded.

If the lumen U-Net is loaded, voxels labeled 1 in the segmentation masks should be relabeled 2.

If the outer rectal wall U-Net is loaded, voxels labeled 1 in the segmentation masks should be relabeled 8.
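
One compact way to express this relabeling (a sketch with a hypothetical helper, assuming the thresholded masks use 1 for foreground):

import numpy as np

# Map each U-Net to the 3D Slicer label it should produce
LABELS = {'Lumen': 2, 'Outer_Rectal_Wall': 8}

def apply_label(mask, model_name):
    # Replace foreground value 1 with the model-specific label
    relabeled = mask.copy()
    relabeled[relabeled == 1] = LABELS[model_name]
    return relabeled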

bug

The corrected code:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

"""
Created on Fri Jan 15 15:34:51 2021

@author: Tom
"""

import load_data as ld
import load_Unet as lm
import resize_img as ri
import numpy as np
import SimpleITK as sitk
import argparse
import ntpath
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'

# Initialize argument parser
parser = argparse.ArgumentParser(description='Specify the Unet and image to annotate.')
parser.add_argument("--m", required=True, choices=['Lumen','Outer_Rectal_Wall'], type=str, help="Choose which U-Net model and thus region to annotate on image.")
parser.add_argument("--i", required=True, type=str, help='Absolute path to .mha file that will be annotated. Must include the .mha file extension.')

# Parse arguments
args = parser.parse_args()

# Load in the model
Model_Name = 'Training_' + args.m + '_Unet.hdf5'
unet = lm.load_Unet_model(Model_Name)

# Load in the image
image_path = args.i
image_np, image_mha = ld.load_image(image_path)

# Predict on the slices
print("Predicting on image...")
predictions = unet.predict(image_np, verbose=1)

# Threshold the slices
if('Lumen' in Model_Name):
    threshold = 0.901763019988
if('Outer_Rectal_Wall' in Model_Name):
    threshold = 0.976268057096

predictions = (predictions > threshold).astype(np.float32)

# Squeeze the predictions
predictions = predictions.squeeze()

# Reshape the predictions
predictions_reshaped = np.swapaxes(predictions, 0, 2) # Switch to (x,y,z)
new_size = image_mha.GetSize()
new_size = new_size[:2]
predictions_reshaped = ri.resize_img(predictions_reshaped, new_size, nn=True) # Resize to original image size
predictions_reshaped = np.transpose(predictions_reshaped, [2,0,1]) # Switch to (z,y,x) because converting it to a .mha file will flip the axes back to (x,y,z)

# Set the segmentation label based on which U-Net was loaded
if('Lumen' in Model_Name):
    predictions_reshaped[predictions_reshaped == 1] = 2
if('Outer_Rectal_Wall' in Model_Name):
    predictions_reshaped[predictions_reshaped == 1] = 8

# Create export filename
filename = ntpath.basename(image_path)
if('Lumen' in Model_Name):
    export_name = filename[:len(filename)-4] + '_Lumen_prediction_label' + filename[(len(filename)-4):]
if('Outer_Rectal_Wall' in Model_Name):
    export_name = filename[:len(filename)-4] + '_ORW_prediction_label' + filename[(len(filename)-4):]

# Export the mask as a .mha file
export_vol = sitk.GetImageFromArray(predictions_reshaped)

export_vol.SetSpacing(image_mha.GetSpacing())
export_vol.SetDirection(image_mha.GetDirection())
export_vol.SetOrigin(image_mha.GetOrigin())

sitk.WriteImage(export_vol, export_name)

# Note: this permutation runs after the file has been written, so it does not
# affect the exported .mha
test = sitk.PermuteAxesImageFilter()

test.SetOrder([2,0,1])

export_vol = test.Execute(export_vol)

Conda environment has incorrect tf version

References issue #10

  • The Conda environment did not specify TensorFlow version 1.15.0
  • Instead, Conda defaulted to the most recent 2.x version of TensorFlow
  • The .yml file includes the whole environment definition (without OS-specific build definitions). This is not necessary; the .yml file should contain only explicit specifications.

For more information about OS-specific build definitions and explicit specifications, see this thread

Refactor load_data

Refactor load_data.py into the following functions (a sketch follows the list):

  1. Function for loading and viewing images
  2. Function for zero-mean normalization
  3. Function for resizing each image slice
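
A hedged sketch of how load_data.py could be split; only load_image and its (image_np, image_mha) return value appear in run_Unet.py, so the other function names and signatures are assumptions:

import cv2
import numpy as np
import SimpleITK as sitk

def load_image(image_path):
    # Read the .mha volume; return the NumPy array (z, y, x) and the
    # SimpleITK image (needed later for spacing, origin and direction)
    image_mha = sitk.ReadImage(image_path)
    image_np = sitk.GetArrayFromImage(image_mha).astype(np.float32)
    return image_np, image_mha

def zero_mean_normalize(volume):
    # Zero-mean, unit-variance normalization over the whole volume
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def resize_slices(volume, size=(128, 128)):
    # Resize each axial slice to the U-Net input size
    resized = np.zeros((volume.shape[0], size[1], size[0]), dtype=volume.dtype)
    for k in range(volume.shape[0]):
        resized[k] = cv2.resize(volume[k], size, interpolation=cv2.INTER_LINEAR)
    return resized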

Suppress TensorFlow Warnings

The following warnings need to be suppressed because this package uses an older version of TensorFlow:

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4185: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:245: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:186: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2021-04-27 18:39:41.172426: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2021-04-27 18:39:41.173013: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 16. Tune using inter_op_parallelism_threads for best performance.
WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:133: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

WARNING:tensorflow:From /Users/Tom/anaconda/envs/tensorflow/lib/python3.7/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
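
One common way to quiet these messages under TensorFlow 1.15 is to raise the log level before importing TensorFlow/Keras; this is a sketch of a possible fix, not necessarily how the repository resolves the issue:

import os
import logging

# Hide the C++-level INFO messages (must be set before TensorFlow is imported)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

import tensorflow as tf

# Silence the Python-level deprecation warnings shown above
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
logging.getLogger('tensorflow').setLevel(logging.ERROR)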

Change segmentation filenames

Adjust output segmentation filenames to reflect which region was segmented.

For outer rectal wall segmentations, the filename should look like:

RectalCA###_Post_Ax_ORW_prediction_label.mha

For lumen segmentations, the filename should look like:

RectalCA###_Post_Ax_Lumen_prediction_label.mha
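
A small sketch of building these names with os.path.splitext rather than index arithmetic; it produces the same result as the slicing in the corrected run_Unet.py above:

import ntpath
import os

def make_export_name(image_path, model_choice):
    # e.g. RectalCA145-2_Post_Ax.mha -> RectalCA145-2_Post_Ax_ORW_prediction_label.mha
    suffix = 'Lumen' if model_choice == 'Lumen' else 'ORW'
    base, ext = os.path.splitext(ntpath.basename(image_path))
    return base + '_' + suffix + '_prediction_label' + ext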

Update thresholds

Update the threshold value applied to the segmentations based on which U-Net is loaded.

Select U-Net Model

Add functionality to select one of the following U-Net models to load (a command-line sketch follows the list):

  • Lumen U-Net
  • Outer Rectal Wall U-Net
  • Rectal Wall U-Net
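
A sketch of how the selection could look on the command line, following the existing Training_<model>_Unet.hdf5 naming pattern; the Rectal_Wall choice, its weights file, and its threshold/label are placeholders that would still need to be defined:

import argparse

parser = argparse.ArgumentParser(description='Specify the U-Net and image to annotate.')
parser.add_argument("--m", required=True, type=str,
                    choices=['Lumen', 'Outer_Rectal_Wall', 'Rectal_Wall'],
                    help="Which U-Net model (and thus region) to annotate on the image.")
args = parser.parse_args()

# Each choice maps to a weights file following the existing naming pattern
Model_Name = 'Training_' + args.m + '_Unet.hdf5'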

Crop to Bounding Box

When predicting on an image, the 2D slice is not cropped to a bounding box centered on the rectum.

Pre-processing workflow:

  1. Zero-mean normalization
  2. Crop to a bounding box around the rectum --> this step is missing! (see the sketch after this list)
  3. Resize the cropped images to (128,128)
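
A minimal sketch of the missing cropping step, assuming the bounding-box center on each slice is already known (how that center is obtained is not specified in this issue, and the crop size is a placeholder):

import cv2
import numpy as np

def crop_to_bounding_box(slice_2d, center_row, center_col, half_size=64):
    # Crop a square window around the (assumed known) rectum center,
    # clipping the window to the image borders
    rows, cols = slice_2d.shape
    r0, r1 = max(center_row - half_size, 0), min(center_row + half_size, rows)
    c0, c1 = max(center_col - half_size, 0), min(center_col + half_size, cols)
    cropped = slice_2d[r0:r1, c0:c1]
    # Resize the crop to the (128, 128) U-Net input size (step 3 of the workflow)
    return cv2.resize(cropped, (128, 128), interpolation=cv2.INTER_LINEAR)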
