abe404 / rootpainter3d

RootPainter3D: Interactive-machine-learning enables rapid and accurate contouring for radiotherapy

License: GNU General Public License v3.0

Python 98.65% NSIS 0.76% Shell 0.59%
machine-learning interactive-machine-learning human-in-the-loop deep-learning contouring segmentation annotation 3d radiotherapy

rootpainter3d's Introduction

RootPainter3D

This software is not approved for clinical use. Please also see the LICENSE file.

Described in the paper "RootPainter3D: Interactive-machine-learning enables rapid and accurate contouring for radiotherapy".

Preprint is available at: https://arxiv.org/pdf/2106.11942.pdf

Published version is available at: http://doi.org/10.1002/mp.15353

Server setup

For the next steps I assume you have a dedicated GPU with CUDA and cuDNN installed. A recent version of Python 3 is required.

  1. Clone the RootPainter3D code from the repository and then cd into the trainer directory (the server component).
git clone https://github.com/Abe404/RootPainter3D.git
cd RootPainter3D/trainer
  2. To avoid altering global packages, I suggest using a virtual environment:
python -m venv env --clear

Note: Make sure you are using Python 3. On some systems you may need to type python3 instead of python.

  3. Then activate the virtual environment:

On Linux:

source ./env/bin/activate

On Windows:

env\Scripts\activate.bat
  4. Install PyTorch by following the instructions on the PyTorch website.

  5. Install the other dependencies in the virtual environment:

pip install -r requirements.txt
  6. Then run RootPainter3D:
python main.py

On first run, you will be prompted to input a location for the sync directory. This is the folder where files, including instructions, are shared between the client and server. I will use ~/root_painter_sync.

The RootPainter3D server will then create folders inside ~/root_painter_sync and start watching for instructions from the client.

You should now be able to see the folders created by RootPainter (datasets, instructions and projects) inside ~/root_painter_sync on your local machine.

Client setup

  1. Clone the RootPainter3D code from the repository and then cd into the painter directory (the client component).
git clone https://github.com/Abe404/RootPainter3D.git
cd RootPainter3D/painter
  2. To avoid altering global packages, I suggest using a virtual environment. Create one:
python -m venv env

And then activate it.

On Linux/macOS:

source ./env/bin/activate

On Windows:

env\Scripts\activate.bat
  3. Install the dependencies in the virtual environment:
pip install -r requirements.txt
  4. Run the client:
python src/main.py

To interactively train a model and annotate images, you will need to add a set of compressed NIfTI images (.nii.gz) to a folder in the datasets folder and then create a project using the client that references this dataset.

For more details, see the published article.

Contribution Guidelines

When we have tests, they should pass.

rootpainter3d's People

Contributors

abe404, andreped, pukrising, rohanorton


rootpainter3d's Issues

Enable annotations to be assigned to segmentations

Segmentations are generated during the interactive training process, i.e. for each image the user annotates.

In some cases, for example semi-automatic image segmentation, it can be useful to work with these segmentations.

Right now the segmentations and their corresponding annotations are both available on disk, but it is not trivial for users to apply their corrections (i.e. the annotation file) to the corresponding segmentation file in such a way that they get a corrected segmentation, which is what they would likely want in a semi-automatic image segmentation workflow.

There should be an option in the extras menu of the client software to allow the user to perform this function.
The form should include a segmentation folder field, an annotation folder field, an output folder field and a submit button. The user should be given feedback whilst the corrections (annotations) are being assigned and the corrected segmentations are being generated.
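As a rough illustration of what this function would do, here is a minimal sketch. It assumes, purely for illustration, that the annotation volume marks foreground corrections with 1 and background corrections with 2; the actual RootPainter3D annotation format may differ.

# Minimal sketch (not the client's actual implementation) of applying an
# annotation (correction) to its corresponding segmentation.
# Assumed encoding for illustration: 1 = foreground correction, 2 = background correction.
import nibabel as nib
import numpy as np

def apply_corrections(seg_path, annot_path, out_path):
    seg_img = nib.load(seg_path)
    seg = seg_img.get_fdata().astype(np.uint8)
    annot = nib.load(annot_path).get_fdata().astype(np.uint8)
    corrected = seg.copy()
    corrected[annot == 1] = 1  # foreground corrections add to the segmentation
    corrected[annot == 2] = 0  # background corrections remove predicted foreground
    nib.save(nib.Nifti1Image(corrected, seg_img.affine), out_path)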

Executables and binary installers + CI

As discussed in PR #7, we thought it was a good idea to set up a workflow for building an installer for the software.

However, this requires us to restructure the code such that the program itself can be installed and run as an executable. The core code could likely be merged into the executable, but there is likely some other stuff that should live outside the executable. Not sure.

I can make an initial attempt and submit a PR when I have something somewhat working.

segment folder function not working for dev_mc branch.

A problem has been reported with segment folder not working in the dev_mc branch.

Looks like the following is causing the issue:

segment_folder.py", line 87, in segment_folder
assert seg_classes[0][0] == 'Background'
AssertionError

I will first work on reproducing this issue and then on a solution (it's probably quite simple to fix).

Multiclass train/validation split is made repeatedly across each class

The decision of whether to add an image to the validation or training set is made when an annotation is saved. However, this happens separately for each class, so an image may be in the training set for one class and the validation set for another. The model could therefore potentially overfit to the validation set.

A simple fix could be to check if the file is already in the training or validation set for another class before choosing the set.
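A minimal sketch of that check, assuming a hypothetical per-class train/val folder layout rather than the project's actual structure:

import os
import random

def choose_set(fname, class_dirs, val_fraction=0.2):
    # reuse an existing assignment if any other class already has this file
    for class_dir in class_dirs:
        if os.path.isfile(os.path.join(class_dir, 'train', fname)):
            return 'train'
        if os.path.isfile(os.path.join(class_dir, 'val', fname)):
            return 'val'
    # first time this image is annotated for any class: assign randomly
    return 'val' if random.random() < val_fraction else 'train'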

Retry error on loading validation set images in multiclass project

I believe I have tracked the error down to line 336 in im_utils.py.

"all_annot_fnames = set(cur_annot_fnames + prev_annot_fnames)"

Reading line 338 makes me assume that the variable "all_annot_fnames" is supposed to match index-wise with all_dirs, as cur_annot_fnames does. However, cur_annot_fnames has repeated elements, because it contains filenames across the multiple directories corresponding to each of the classes. Moreover, the set data structure does not preserve order, so any correspondence between cur_annot_fnames and all_dirs is lost. The end result is that filenames are sometimes assumed to be in directories where they are not actually present, which triggers the retry error. If my interpretation is correct, I also assume this means a much smaller validation set is actually used than what is present, as filenames are unique across classes.
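A minimal sketch of a per-directory check that would preserve the filename/directory correspondence (the names are hypothetical, not the actual im_utils.py variables):

import os

def dirs_containing(fname, annot_dirs):
    # check each class's annotation directory individually instead of
    # collapsing all filenames into a single, order-losing set
    return [d for d in annot_dirs if os.path.isfile(os.path.join(d, fname))]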

handle images that are smaller than patch size

Right now the system crashes if an image is smaller than the network patch size. The network patch size should just be made smaller automatically (or we could automatically pad the image, but I think this makes less sense).
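A minimal sketch of the automatic shrinking, ignoring any divisibility constraints the network may impose on its input shape:

def clamp_patch_shape(image_shape, patch_shape):
    # use the smaller of the image and the configured patch along each axis;
    # a real fix would also round to whatever multiple the network requires
    return tuple(min(i, p) for i, p in zip(image_shape, patch_shape))

# e.g. clamp_patch_shape((52, 240, 240), (64, 256, 256)) -> (52, 240, 240)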

CryptographyDeprecationWarning

I get this warning when starting the trainer:

RootPainter3D/trainer/env/lib/python3.10/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
  "class": algorithms.Blowfish,

ignore 'hidden' files when creating a project

I had created a project using the hepatic vessels data from http://medicaldecathlon.com/

The dataset had files in it such as ._hepaticvessel_333.nii.gz

As far as I know, these are metadata files created by OSX. I think it is safe to assume that if a file name starts with . then it should not be read by RootPainter3D.

Alter the create-project functionality to ignore such files, as I do not want them to end up in the created project file.
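A minimal sketch of the filtering, assuming the dataset listing is a simple directory scan:

import os

def dataset_fnames(dataset_dir):
    # skip OSX metadata and other hidden files that start with a dot
    return [f for f in sorted(os.listdir(dataset_dir))
            if f.endswith('.nii.gz') and not f.startswith('.')]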

lock files

There is not currently a clear way for two users to work on the same project. A problem is that two users could open the same file and then over-write each other's annotations when saving.

A lock file system could solve this.

  • A locks folder could be used. This folder could be created if it doesn't exist when lock files are created (to allow backward compatibility with older projects).
  • When a user views an image, a lock file could be created in the locks folder. The lock file would just be a single file with the same name as the viewed image. The contents of the lock file could be the name of the user viewing the file (taken by getting the current username, which is available from Python, see https://stackoverflow.com/a/842096).
  • When a user stops viewing an image, i.e. when closing the application or when changing to a different image, the lock file should be removed (deleted).
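A minimal sketch of the proposed scheme (the function and folder names are hypothetical, not an existing API):

import os
import getpass

def acquire_lock(locks_dir, image_fname):
    os.makedirs(locks_dir, exist_ok=True)  # create lazily, for older projects
    lock_path = os.path.join(locks_dir, image_fname)
    if os.path.isfile(lock_path):
        with open(lock_path) as f:
            return False, f.read().strip()  # locked; return the holder's username
    with open(lock_path, 'w') as f:
        f.write(getpass.getuser())  # record who is viewing the image
    return True, None

def release_lock(locks_dir, image_fname):
    lock_path = os.path.join(locks_dir, image_fname)
    if os.path.isfile(lock_path):
        os.remove(lock_path)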

A possible limitation/issue that may arise is that if the application crashes for some reason or is quit unexpectedly, there may not be a clean way to delete the lock file. I will consider this issue later, but some possible solutions include:

  1. Try to have some sort of error handler delete the lock file, if it is possible to handle an exception before quitting the application.
  2. Make users aware of the lock file situation and the associated folder. They can be told how to manually delete the lock files if appropriate. A crash should not be part of normal behaviour anyway, so perhaps this is enough. It may also make sense to inform users of any existing lock files when they open the application, as these will alter functionality anyway. We could display an alert message (only if there are some lock files) listing the existing lock files, what they mean and their location on the file system. A more technically knowledgeable user would then know whether it is appropriate to delete the lock files (perhaps it's better to leave this decision to the users anyway).
  3. When the user the lock file is associated with opens the application again, the lock file could be deleted, as we can assume they are no longer viewing a file.

Usage instructions unclear

Just tried to install the generated binary release from the latest artefact and it crashes.

I believe we should thoroughly test whether the software can be run as a binary.

Ideally, the user should not need to run anything from the terminal; being forced to open a terminal somewhat defeats the purpose of a user-friendly tool.

It might be that we have to run something from the terminal, for instance to set up training on the server, but ideally we should not need to.

Perhaps we could schedule a meeting to discuss this further and get RootPainter3D running on my machine?

When all this is running, it would also be ideal to make a simple tutorial video demonstrating how to get started and how to use the software :) We could create an issue (or rather a feature request) for this when it becomes relevant.

Help > Keyboard shortcuts does not work

The "Keyboard shortcuts" menu item does not display anything. It prints the following error

Traceback (most recent call last):
File "main.py", line 80, in
init_root_painter()
File "main.py", line 66, in init_root_painter
exit_code = app_context.app.exec_()
File "/RootPainter3D/painter/src/main/python/root_painter.py", line 422, in show_shortcut_window
self.shortcut_window = ShortcutWindow()
File "/RootPainter3D/painter/src/main/python/about.py", line 798, in init
self.initUI()
File "/RootPainter3D/painter/src/main/python/about.py", line 859, in initUI
table.horizontalHeader().sectionSizeFromContents(True)
RuntimeError: no access to protected functions or signals for objects not created from Python

Linux Debian 10
Python 3.7.3
PyQt 5.14.2

Can be solved by removing

table.horizontalHeader().sectionSizeFromContents(True)

It then looks like the attached screenshot.

Don't start training when changing between classes.

In the 3D multiclass version, when changing the current class to be annotated, a start-training instruction is sent. I think it would be better if a start-training instruction were only sent when explicitly asked for.

Adapt patch size to available GPU memory

Network input (and corresponding output) patch size does not currently adapt to the available GPU memory.
It's possible for users to modify the patch size to suit their GPU memory, but it's annoying and error-prone to do so.
Users shouldn't have to think about this; patch size adaptation to the available GPU memory should happen automatically.

RootPainter max workers specifiable

The RootPainter server eats up all the CPU resources, which may lead to problems when running the server and client on the same machine. It also makes it hard to run other things in parallel.
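A minimal sketch of how a --max-workers option could look. The flag is hypothetical and would still need wiring through to the data-loading and multiprocessing code:

import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument('--max-workers', type=int, default=None,
                    help='limit CPU threads/worker processes used by the trainer')
args = parser.parse_args()

if args.max_workers is not None:
    torch.set_num_threads(args.max_workers)  # caps PyTorch intra-op CPU threads
    # DataLoader num_workers and any multiprocessing pools would also need capping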

have patch size as input argument

The patch size is currently found automatically by testing various input sizes and seeing which ones cause a memory error.
This seems to be time consuming and slows the development cycle.

Suggested steps:

  1. Profile how long it currently takes to find the patch size.
  2. If it is considerable (more than a couple of seconds or so), then provide the patch dimensions as input arguments (see the sketch below).
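A minimal sketch of step 2, with hypothetical argument names (not flags the trainer currently accepts):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--patch-depth', type=int, default=None)
parser.add_argument('--patch-height', type=int, default=None)
parser.add_argument('--patch-width', type=int, default=None)
args = parser.parse_args()

if None not in (args.patch_depth, args.patch_height, args.patch_width):
    patch_shape = (args.patch_depth, args.patch_height, args.patch_width)
else:
    patch_shape = None  # fall back to the existing automatic search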

auto-build

The CI build process does not yet work.

The plan is that I will get it working for Ubuntu 20.04 (keep altering the main.spec file until it builds) and then Andre will get it working with Ubuntu 18.

Trainer fails because of dtype mismatch in call to cnn function

Trainer fails because of dtype mismatch in call to cnn function.
Log output below. Input data is uint16.

found instruction segment_-8639686351424129568
execute_instruction segment
segmented input shape (235, 235, 235)
segment image, input shape = (235, 235, 235)
Exception parsing instruction Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same
Traceback (most recent call last):
File "/trainer/trainer.py", line 154, in execute_instruction
getattr(self, name)(config)
File "/trainer/trainer.py", line 509, in segment
overwrite=overwrite)
File "/trainer/trainer.py", line 592, in segment_file
out_d, classes, bounded)
File "/trainer/model_utils.py", line 147, in ensemble_segment_3d
pred_maps = segment_3d(cnn, image, batch_size, in_patch_shape, out_patch_shape)
File "/trainer/model_utils.py", line 244, in segment_3d
outputs = cnn(tiles_for_gpu).detach().cpu()
File "/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
return self.module(*inputs[0], **kwargs[0])
File "/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/trainer/unet3d.py", line 129, in forward
out1 = self.conv_in(x)
File "/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/python3.6/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/python3.6/site-packages/torch/nn/modules/conv.py", line 590, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/python3.6/site-packages/torch/nn/modules/conv.py", line 586, in _conv_forward
input, weight, bias, self.stride, self.padding, self.dilation, self.groups
RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same
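A likely fix (the exact location in model_utils.py is an assumption on my part) is to cast the image to float32 before it becomes a tensor, so the input dtype matches the model's float32 weights:

import numpy as np
import torch

def to_float_tensor(image):
    # without an explicit cast, the uint16 image can end up as float64
    # ("DoubleTensor"); float32 matches the network weights
    return torch.from_numpy(image.astype(np.float32))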

Support 3 views

There are two views, the main view and the "sagittal" view. It would be nice if the third view could also be seen, maybe by toggling what is shown in the sagittal window.
I am annotating some structures that are much easier to differentiate in the third view.

Add support for .nii files

See #9 (comment)

Andre wants to test the software with .nii files. I think it's probably easy to add support for this, considering we already support compressed NIfTI (.nii.gz).
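A minimal sketch of the extended file check; nibabel loads both .nii and .nii.gz with the same call, so the change is mostly about which filenames are accepted:

import nibabel as nib

def is_nifti(fname):
    return fname.endswith('.nii.gz') or fname.endswith('.nii')

def load_image(path):
    return nib.load(path).get_fdata()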

handle floating point values in images

Right now the contrast settings assume that the images contain integer HU values, but some datasets, for example the hepatic vessels data from http://medicaldecathlon.com/, include floating point values.

Looking at the data, it seems safe to cast it to int. I will do that for now, until we encounter a dataset that actually needs floating point values.
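A minimal sketch of that cast (rounding rather than truncating, and assuming int16 is wide enough for the HU range, both of which are assumptions on my part):

import numpy as np

def to_int_hu(image):
    # round floating point intensities to the nearest integer HU value
    return np.rint(image).astype(np.int16)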
