MONAI Label is an intelligent open source image labeling and learning tool.

Home Page: https://docs.monai.io/projects/label

License: Apache License 2.0


monailabel's Introduction

MONAI Label


MONAI Label is an intelligent open source image labeling and learning tool that enables users to create annotated datasets and build AI annotation models for clinical evaluation. MONAI Label enables application developers to build labeling apps in a serverless way, where custom labeling apps are exposed as a service through the MONAI Label Server.

MONAI Label is a server-client system that facilitates interactive medical image annotation using AI. It is an open-source, easy-to-install ecosystem that can run locally on a machine with one or more GPUs. The server and client can run on the same machine or on different machines. It shares the same principles as MONAI.

Refer to the full MONAI Label documentation for more details, or check out our MONAI Label Deep Dive video series.

Refer to the MONAI Label Tutorial series for application and viewer workflows with different medical image tasks. Notebook-like tutorials provide detailed instructions.


Overview

MONAI Label reduces the time and effort of annotating new datasets and enables the adaptation of AI to the task at hand by continuously learning from user interactions and data. MONAI Label allows researchers and developers to make continuous improvements to their apps by letting them interact with those apps as a user would. End-users (clinicians, technologists, and annotators in general) benefit from AI that continuously learns and becomes better at understanding what the end-user is trying to annotate.

MONAI Label aims to fill the gap between developers creating new annotation applications and the end users who want to benefit from these innovations.

Highlights and Features

  • Framework for developing and deploying MONAI Label Apps to train and infer AI models
  • Compositional & portable APIs for ease of integration in existing workflows
  • Customizable labeling app design for varying user expertise
  • Annotation support via 3DSlicer & OHIF for radiology
  • Annotation support via QuPath, Digital Slide Archive, and CVAT for pathology
  • Annotation support via CVAT for Endoscopy
  • PACS connectivity via DICOMWeb
  • Automated Active Learning workflow for endoscopy using CVAT

Supported Matrix

MONAI Label supports many state-of-the-art (SOTA) models in the Model-Zoo and their integration with viewers through the monaibundle app. Please refer to the monaibundle app page for supported models, including whole-body segmentation, whole-brain segmentation, lung nodule detection, tumor segmentation, and many more.

In addition, the table below lists the basic supported fields, modalities, viewers, and general data types. However, these are only the ones we have explicitly tested; that doesn't mean your dataset or file type won't work with MONAI Label. Try MONAI Label for your given task, and if you run into issues, reach out through GitHub Issues.

| Field | Models | Viewers | Data Types | Image Modalities/Target |
|-----------|-------------------------------------------------|--------------------------------------|------------------------------|---------------------------------------------|
| Radiology | Segmentation, DeepGrow, DeepEdit | 3DSlicer, OHIF | NIfTI, NRRD, DICOM | CT, MRI |
| Pathology | DeepEdit, NuClick, Segmentation, Classification | Digital Slide Archive, QuPath, CVAT | TIFF, SVS | Nuclei Segmentation, Nuclei Classification |
| Video | DeepEdit, Tooltracking, InBody/OutBody | CVAT | JPG, 3-channel Video Frames | Endoscopy |

Getting Started with MONAI Label

MONAI Label requires a few steps to get started:

Step 1 Installation

Current Stable Version


pip install -U monailabel
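
To verify the installation, a quick sanity check (assuming the package exposes a __version__ attribute, as most PyPI packages do):

import monailabel
print(monailabel.__version__)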

MONAI Label supports the following operating systems with GPU/CUDA enabled. For more detailed instructions, please see the installation guides.

GPU Acceleration (Optional Dependencies)

The following optional dependencies can help you accelerate some GPU-based transforms from MONAI. These dependencies are enabled by default if you are using the projectmonai/monailabel Docker image.

Development version

To install the latest features, use one of the following options:

Git Checkout (developer mode)
  git clone https://github.com/Project-MONAI/MONAILabel
  pip install -r MONAILabel/requirements.txt
  export PATH=$PATH:`pwd`/MONAILabel/monailabel/scripts

If you are using DICOM-Web + OHIF, then you have to build the OHIF package separately. Please refer to the [development setup](https://github.com/Project-MONAI/MONAILabel/tree/main/plugins/ohif#development-setup).

Docker
docker run --gpus all --rm -ti --ipc=host --net=host projectmonai/monailabel:latest bash

Step 2 MONAI Label Sample Applications

Radiology

This app has example models for both interactive and automated segmentation of radiology (3D) images, including auto-segmentation of multiple abdominal organs with the latest deep learning models (e.g., UNet, UNETR). Interactive tools include DeepEdit and Deepgrow for actively improving trained models and deployment.

  • Deepedit
  • Deepgrow
  • Segmentation
  • Spleen Segmentation
  • Multi-Stage Vertebra Segmentation

Pathology

This app has example models for both interactive and automated segmentation of pathology (WSI) images, including multi-label nuclei segmentation for neoplastic, inflammatory, connective/soft tissue, dead, and epithelial cells. The app provides interactive tools, including DeepEdit, for interactive nuclei segmentation.

  • Deepedit
  • NuClick
  • Segmentation
  • Classification

Video

The Endoscopy app enables users to use interactive and automated segmentation and classification models on 2D images for the endoscopy use case. Combined with CVAT, it demonstrates a fully automated Active Learning workflow to train and fine-tune a model.

  • Deepedit
  • ToolTracking
  • InBody/OutBody

Bundles

The Bundle app enables users to bring customized models for inference, training, or pre- and post-processing of any target anatomies. The MONAI Label integration specification for the Bundle app links to the archived Model-Zoo for customized labeling (e.g., a third-party transformer model for labeling the renal cortex, medulla, and pelvicalyceal system, or interactive tools such as DeepEdit).

For a full list of supported bundles, see the MONAI Label Bundles README.

Step 3 MONAI Label Supported Viewers

Radiology

3D Slicer

3D Slicer is a free and open-source platform for analyzing, visualizing, and understanding medical image data. In MONAI Label, 3D Slicer is the most tested viewer for radiology studies and for algorithm development and integration.

3D Slicer Setup

OHIF

The Open Health Imaging Foundation (OHIF) Viewer is an open source, web-based, medical imaging platform. It aims to provide a core framework for building complex imaging applications.

OHIF Setup

Pathology

QuPath

Quantitative Pathology & Bioimage Analysis (QuPath) is an open, powerful, flexible, extensible software platform for bioimage analysis.

QuPath Setup

Digital Slide Archive

The Digital Slide Archive (DSA) is a platform that provides the ability to store, manage, visualize, and annotate large imaging datasets.

Digital Slide Archive Setup

Video

CVAT

CVAT is an interactive video and image annotation tool for computer vision.

CVAT Setup

Step 4 Data Preparation

For data preparation you have two options: a local datastore, or any image archive tool that supports DICOMWeb.

Local Datastore for the Radiology App on single modality images

For a Datastore in a local file archive, there is a set folder structure that MONAI Label uses. Place your image data in a folder and if you have any segmentation files, create and place them in a subfolder called labels/final. You can see an example below:

dataset
│-- spleen_10.nii.gz
│-- spleen_11.nii.gz
│   ...
└───labels
    └─── final
        │-- spleen_10.nii.gz
        │-- spleen_11.nii.gz
        │   ...

If you don't have labels, just place the images/volumes in the dataset folder.
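
As a quick illustration, a minimal sketch (assuming the layout above) that enumerates images and reports which ones already have a final label:

from pathlib import Path

dataset = Path("dataset")
for image in sorted(dataset.glob("*.nii.gz")):
    label = dataset / "labels" / "final" / image.name
    print(image.name, "->", "labeled" if label.exists() else "unlabeled")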

DICOMWeb Support

If the viewer you're using supports the DICOMweb standard, you can use it instead of a local datastore to serve images to MONAI Label. When starting the MONAI Label server, specify the URL of the DICOMweb service in the studies argument (and, optionally, the username and password for DICOM servers that require them). You can see an example of starting the MONAI Label server with a DICOMweb URL below:

monailabel start_server --app apps/radiology --studies http://127.0.0.1:8042/dicom-web --conf models segmentation
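
Before pointing MONAI Label at the service, it can help to confirm that the DICOMweb endpoint is reachable. A minimal sketch using a QIDO-RS query (assuming an Orthanc server at the URL above):

import requests

resp = requests.get("http://127.0.0.1:8042/dicom-web/studies")
resp.raise_for_status()
print(f"{len(resp.json())} studies available")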

Step 5 Start MONAI Label Server and Start Annotating

You're now ready to start using MONAI Label. Once you've configured your viewer, app, and datastore, you can launch the MONAI Label server with the relevant parameters. For simplicity, you can see an example where we download a Radiology sample app and dataset, then start the MONAI Label server below:

monailabel apps --download --name radiology --output apps
monailabel datasets --download --name Task09_Spleen --output datasets
monailabel start_server --app apps/radiology --studies datasets/Task09_Spleen/imagesTr --conf models segmentation
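
Once the server is up (default port 8000), you can probe its REST API; for example, the /info/ endpoint returns the active app's models and configuration as JSON. A minimal sketch assuming the default host and port:

import requests

info = requests.get("http://127.0.0.1:8000/info/").json()
print(list(info["models"].keys()))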

Note: If you want to work on different labels than the ones proposed by default, change the configs file following the instructions here: https://youtu.be/KtPE8m0LvcQ?t=622


Cite

If you are using MONAI Label in your research, please use the following citation:

@article{DiazPinto2022monailabel,
   author = {Diaz-Pinto, Andres and Alle, Sachidanand and Ihsani, Alvin and Asad, Muhammad and
            Nath, Vishwesh and P{\'e}rez-Garc{\'\i}a, Fernando and Mehta, Pritesh and
            Li, Wenqi and Roth, Holger R. and Vercauteren, Tom and Xu, Daguang and
            Dogra, Prerna and Ourselin, Sebastien and Feng, Andrew and Cardoso, M. Jorge},
    title = {{MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images}},
  journal = {arXiv e-prints},
     year = 2022,
     url  = {https://arxiv.org/pdf/2203.12362.pdf}
}

@inproceedings{DiazPinto2022DeepEdit,
      title={{DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images}},
      author={Diaz-Pinto, Andres and Mehta, Pritesh and Alle, Sachidanand and Asad, Muhammad and Brown, Richard and Nath, Vishwesh and Ihsani, Alvin and Antonelli, Michela and Palkovics, Daniel and Pinter, Csaba and others},
      booktitle={MICCAI Workshop on Data Augmentation, Labelling, and Imperfections},
      pages={11--21},
      year={2022},
      organization={Springer}
}

Optional citation: if you are using the active learning functionality from MONAI Label, please also cite:

@article{nath2020diminishing,
  title={Diminishing uncertainty within the training pool: Active learning for medical image segmentation},
  author={Nath, Vishwesh and Yang, Dong and Landman, Bennett A and Xu, Daguang and Roth, Holger R},
  journal={IEEE Transactions on Medical Imaging},
  volume={40},
  number={10},
  pages={2534--2547},
  year={2020},
  publisher={IEEE}
}

Contributing

For guidance on making a contribution to MONAI Label, see the contributing guidelines.

Community

Join the conversation on Twitter @ProjectMONAI or join our Slack channel.

Ask and answer questions over on MONAI Label's GitHub Discussions tab.



monailabel's Issues

Training workflow for DeepEdit with DL-based Image Selection

  • The training process automatically starts when the user submits the first label.
  • The training process stops ONLY when the user clicks the Stop training/Finish/Close System button.
    The most relevant hyperparameters are batch size and learning rate. To start with, the batch size can be set to 2, and the learning rate can be set to 1e-4 and decreased by a factor of 0.1 every iteration/epoch. The system resets the learning rate to 1e-4 every time the user provides a new label (see the sketch after this list).
  • The system updates the weights file every iteration.
  • After the user stops the training, the system keeps/saves the last model/weights and updates the YAML file or App so the user has the option to continue with the pre-trained model.
  • During the training process, a validation step is not needed; the active learning techniques will indirectly serve as a model validation process.
    Validation is different from model quality control (QC). QC will be performed on a different dataset once the clinician/user provides X amount of labels and there is a pre-trained model.
  • The active learning techniques use the core model in inference mode: they will use the most up-to-date model to select the next sample.
[Figure: training workflow diagram]
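
A minimal sketch of the learning-rate policy described above (the proposed values; the network is a placeholder):

import torch

model = torch.nn.Conv3d(1, 2, kernel_size=3)  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.1)  # x0.1 decay per epoch

def on_new_label():
    # Reset the learning rate to 1e-4 whenever the user submits a new label.
    for group in optimizer.param_groups:
        group["lr"] = 1e-4

# per epoch: run training, then scheduler.step() to apply the decay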

Save Training Runtime Stats

After each training session, output:

  • Training stats in the training task
    • We're not saving training-related stats
    • Can MONAI provide handlers to save training stats and dump them to the model folder? (loss, metrics, samples, etc.)
  • Validation stats
  • Model [DONE]
    • If the model folder is deleted, we should re-create and re-initialize the weights (?)

Add additional info for each image/label pair

Example: when a label is saved, we want to compute the mismatch between the original model's prediction and the user-submitted label (a Dice-style sketch follows below).
Also, an option to save temporary labels (generated by auto-segmentation) in the datastore for unlabeled images.
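
A minimal sketch of such a mismatch metric, using the Dice overlap between the model's prediction and the user-submitted label (binary arrays assumed; names are illustrative):

import numpy as np

def dice_mismatch(pred: np.ndarray, truth: np.ndarray) -> float:
    """Return 1 - Dice; 0 means the prediction matches the submitted label."""
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)
    return 1.0 - dice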

Authentication strategy and slicer plugin

Given the client/server nature of MONAILabel, it would be interesting to implement some functionality for user authentication. Is this on the roadmap?

In any case, some documentation about its status in MONAILabel would be very useful.

I am particularly interested in the Slicer plugin, but similar questions would arise with other client integrations (e.g., OHIF).

Windows installation issues

The following issues were experienced when installing MONAILabel using the installation instructions in the readme on a Windows machine:

  1. pip install requirements.txt: installation of monai-weekly[all] was not possible as the cucim package could not be found. The workaround was to install monai instead.
  2. Several dependencies had to be installed manually: scikit-image, matplotlib==3.3.0 (3.3.4 caused issues), pytorch-ignite, tensorboard, nibabel. These are potentially available in monai-weekly[all].
  3. Running the app: native Windows filepaths throw an escape-character error due to the use of backslashes. The workaround was to type filepaths using forward slashes (see the sketch after this list).
  4. monailabel/main: in lines 75 and 76, os.path.realpath converts filepaths entered with forward slashes back into native Windows filepaths with backslashes, subsequently throwing errors related to escape characters. The workaround was to comment out lines 75 and 76.
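
For issues 3 and 4, a small sketch of the forward-slash workaround using pathlib (the path is illustrative):

from pathlib import PureWindowsPath

studies = PureWindowsPath(r"C:\Users\me\datasets\Task09_Spleen")  # raw string avoids escape-character errors
print(studies.as_posix())  # C:/Users/me/datasets/Task09_Spleen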

Simple API to get label based on image and tag

Currently we have to fetch all matching labels for an image id and then iterate over them to find the matching one.
It would be better to have a simple API for that,

something like: get_label(image_id, tag)
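
A minimal sketch of what such a helper could look like (all names are hypothetical, not the current Datastore API):

def get_label(datastore, image_id, tag):
    """Return the first label for image_id whose tag matches, else None."""
    for label in datastore.get_labels(image_id):  # hypothetical accessor
        if label.get("tag") == tag:
            return label
    return None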

create/generate template/example App

A simple command-line tool to generate an example app based on a template.
This will help researchers kick-start work on their use case directly.

KeyError when trying to run the server

Hi all,

I'm getting this error when trying to start the server:

$ python main.py --app ../sample-apps/segmentation_spleen/ --studies ../datasets/Task09_Spleen                                                                                                          
USING:: app = /home/fernando/git/MONAILabel/sample-apps/segmentation_spleen
USING:: studies = /home/fernando/git/MONAILabel/datasets/Task09_Spleen
USING:: debug = False
USING:: host = 0.0.0.0
USING:: port = 8000
USING:: reload = False
USING:: log_config = None
USING:: dryrun = False

[2021-04-27 14:16:26,216] [INFO] (uvicorn.error) - Started server process [29850]
[2021-04-27 14:16:26,216] [INFO] (uvicorn.error) - Waiting for application startup.
[2021-04-27 14:16:26,216] [INFO] (monailabel.utils.app_utils) - Initializing App from: /home/fernando/git/MONAILabel/sample-apps/segmentation_spleen; studies: /home/fernando/git/MONAILabel/datasets/Task09_Spleen
[2021-04-27 14:16:28,600] [ERROR] (uvicorn.error) - Traceback (most recent call last):
  File "/home/fernando/miniconda3/envs/smonai/lib/python3.7/site-packages/starlette/routing.py", line 526, in lifespan
    async for item in self.lifespan_context(app):
  File "/home/fernando/miniconda3/envs/smonai/lib/python3.7/site-packages/starlette/routing.py", line 467, in default_lifespan
    await self.startup()
  File "/home/fernando/miniconda3/envs/smonai/lib/python3.7/site-packages/starlette/routing.py", line 502, in startup
    await handler()
  File "main.py", line 54, in startup_event
    get_app_instance()
  File "/home/fernando/git/MONAILabel/monailabel/utils/app_utils.py", line 17, in get_app_instance
    return app_instance(settings.APP_DIR, settings.STUDIES)
  File "/home/fernando/git/MONAILabel/monailabel/utils/app_utils.py", line 35, in app_instance
    o = c(app_dir=app_dir, studies=studies)
  File "/home/fernando/git/MONAILabel/sample-apps/segmentation_spleen/main.py", line 27, in __init__
    active_learning=MyActiveLearning()
  File "/home/fernando/git/MONAILabel/monailabel/interface/app.py", line 32, in __init__
    self._dataset: Datastore = LocalDatastore(studies)
  File "/home/fernando/git/MONAILabel/monailabel/interface/datastore_local.py", line 134, in __init__
    validate(self._dataset_config['objects'], LocalDatastore._schema['properties']['objects'])
KeyError: 'objects'

[2021-04-27 14:16:28,600] [ERROR] (uvicorn.error) - Application startup failed. Exiting.

Once training finishes, the 'Stop Training' button should not need to be pressed; the signal should be sent automatically

Describe the bug
After annotating a few samples ('N' samples), pressing the 'Update model' button begins training the model. After the training finishes and the stats are reported, a signal should be sent to 3D Slicer that training has stopped. At the moment, the 'Stop Training' button has to be pressed manually.

To Reproduce
Steps to reproduce the behavior:

  1. Follow the installation instructions and add the module to Slicer
  2. Load the data, annotate a few samples, hit the 'Update Model'
  3. Wait for training to finish and the stats to get reported
  4. In 3D Slicer, the 'Stop training' button should now not need to be pressed manually.

Expected behavior
The expectation is that the 'Stop Training' button should not be pressed manually.

Screenshots
[Screenshot omitted]

Wrong Orientation/Origin for multichannel volumes (4D) using Slicer - Issue visualizing BRATS images on 3DSlicer

The BRATS and prostate datasets have more than one modality. It seems we're having the same problem I had before: once Slicer loaded the volume, I didn't find a way to get back all the modalities.

This was something I left on the to-solve list. As a workaround, I extract the modalities before loading in Slicer.

In addition to the multimodality issue in Slicer, the BRATS data seems to have a problem with the orientation.

Datalist/Studies to support datasets based on Local FileSystem

Need to create a base class, something like Studies, which will have the following implementations:

  • LocalStudies (file://)
  • RemoteStudies (http://)
  • DICOMStudies (dicom)
from abc import ABC, abstractmethod

class Studies(ABC):
    @abstractmethod
    def list_images(self, filters):
        pass

    @abstractmethod
    def list_labels(self, filters):
        pass

    @abstractmethod
    def get_image(self, filters):
        pass

    @abstractmethod
    def get_label(self, filters):
        pass

    @abstractmethod
    def save_label(self, label, info):
        pass

Use user-app virtual environment for inference

Problem: User applications will also need to run with a user-app-specific virtual environment.

Current state: The inference branch of the user application runs in the same environment (and process) as MONAI Label Server

Tracking externally added samples

  • Have separate folders for original and final labels respectively: labels_original and labels_final
  • Have a watcher scan the image folder to add new images to the datastore
  • Use the watcher to handle updates and label deletions

inotify watch limit reached

Describe the bug
Not sure what exactly generated the error, but it happened when I added new images to the datastore folder while the server was running. Now I can't start the server again.

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Install '....'
  3. Run commands '....'

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots

Environment

Ensuring you use the relevant python executable, please paste the output of:

python -c 'import monai; monai.config.print_debug_info()'

Additional context

File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/starlette/routing.py", line 526, in lifespan
async for item in self.lifespan_context(app):
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/starlette/routing.py", line 467, in default_lifespan
await self.startup()
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/starlette/routing.py", line 502, in startup
await handler()
File "main.py", line 52, in startup_event
get_app_instance()
File "/home/adp20local/Documents/MONAILabel/monailabel/utils/others/generic.py", line 89, in get_app_instance
return app_instance(settings.APP_DIR, settings.STUDIES)
File "/home/adp20local/Documents/MONAILabel/monailabel/utils/others/app_utils.py", line 32, in app_instance
o = c(app_dir=app_dir, studies=studies)
File "/home/adp20local/Documents/MONAILabel/sample-apps/deepedit_heart/main.py", line 73, in init
resources=None,
File "/home/adp20local/Documents/MONAILabel/monailabel/interfaces/app.py", line 42, in init
self._datastore: Datastore = LocalDatastore(studies)
File "/home/adp20local/Documents/MONAILabel/monailabel/utils/datastore/local.py", line 110, in init
self._observer.start()
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/watchdog/observers/api.py", line 256, in start
emitter.start()
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/watchdog/utils/init.py", line 93, in start
self.on_thread_start()
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/watchdog/observers/inotify.py", line 118, in on_thread_start
self._inotify = InotifyBuffer(path, self.watch.is_recursive)
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 35, in init
self._inotify = Inotify(path, recursive)
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 199, in init
self._add_dir_watch(path, recursive, event_mask)
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 396, in _add_dir_watch
self._add_watch(path, mask)
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 417, in _add_watch
Inotify._raise_error()
File "/home/adp20local/anaconda3/envs/monai_env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 429, in _raise_error
raise OSError(errno.ENOSPC, "inotify watch limit reached")
OSError: [Errno 28] inotify watch limit reached

[2021-05-21 16:27:20,183] [ERROR] (uvicorn.error) - Application startup failed. Exiting.
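
For context: Linux's inotify watch limit can be inspected directly; if it is low, raising it usually resolves this error. A minimal sketch (Linux-only; the suggested value is illustrative):

from pathlib import Path

limit = int(Path("/proc/sys/fs/inotify/max_user_watches").read_text())
print(f"fs.inotify.max_user_watches = {limit}")
# If this is low (e.g. 8192), raise it as root, for example:
#   sudo sysctl fs.inotify.max_user_watches=524288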

OHIF - Segmentation Table

Create a generic segmentation table for adding/modifying/deleting segments (refer to the AIAA plugin).
This is needed to create labels for deepedit/segmentation tasks and to show results.

Default Recipe/Configs included for MONAILabel Apps

The clinician or technically novice user is abstracted from the details of the underlying logic of MONAILabel Apps that determines background actions like:

  • When to start training
  • When to stop training (acceptable validation score)
  • Metrics to quantify and monitor that the new derivative model for the user's data is getting better

Support deepgrow pipeline

Support running a pipeline which can:

  1. Run Deepgrow 3D, then select random points from the resulting label mask and feed them to Deepgrow 2D for all eligible slices (a point-sampling sketch follows this list)
  2. Support the above for any 3D segmentation model
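
A minimal sketch of the random point selection in step 1 (mask shape and point count are illustrative):

import numpy as np

def sample_points(mask: np.ndarray, n: int = 5) -> np.ndarray:
    """Sample up to n random foreground (z, y, x) points from a 3D label mask."""
    coords = np.argwhere(mask > 0)
    if len(coords) == 0:
        return coords  # no foreground voxels
    idx = np.random.choice(len(coords), size=min(n, len(coords)), replace=False)
    return coords[idx]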

Support Inference Engines for other MSD datasets

Currently monailabel packages inference engines for Deepgrow 2D/3D and segmentation_spleen.
Let's add a few more, like:

  • Heart
  • Prostate
  • Brain
  • Liver
  • Lung, etc.

This will help researchers kick-start their app using ready-to-use inference engines, or they can customize on top of them.

Add Custom Label Tag Options to Datastore

  • Having original and final tags is not sufficient to cover use cases
  • Alternatives
    • Consider removing original and final
    • Consider keeping the above but adding a custom enum

Multi Thread and async training support

Currently the basic run is single-threaded...

  1. Support a different device for inference (cuda:0)
  2. Support a different device for training (cuda:1)
  3. Run training async (see the sketch after this list)
  4. Support fetching currently running training jobs
  5. Limit the number of training jobs to a max of 1 (for version 0.1)
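
A minimal sketch of items 1-5 using a single-worker thread pool (device strings and the training body are placeholders):

from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)  # item 5: at most one training job

def train(device="cuda:1"):  # item 2: dedicated training device
    pass  # training loop placeholder; inference stays on cuda:0 (item 1)

future = executor.submit(train)  # item 3: run training async
print(future.running())          # item 4: query the currently running job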

[Slicer] ITK reader - orthonormal direction cosines

This occurs sometimes when clicking on the Next image button:

Traceback (most recent call last):
  File "/home/adp20local/Documents/MONAILabel/plugins/slicer/MONAILabel/MONAILabel.py", line 828, in onClickSegmentation
    self.updateSegmentationMask(result_file, self.models[model].get("labels"))
  File "/home/adp20local/Documents/MONAILabel/plugins/slicer/MONAILabel/MONAILabel.py", line 936, in updateSegmentationMask
    labelImage = sitk.ReadImage(in_file)
  File "/home/adp20local/Documents/viewers/Slicer-4.13.0-2021-04-26-linux-amd64/lib/Python/lib/python3.6/site-packages/SimpleITK-2.1.0.dev217-py3.6-linux-x86_64.egg/SimpleITK/extra.py", line 346, in ReadImage
    return reader.Execute()
  File "/home/adp20local/Documents/viewers/Slicer-4.13.0-2021-04-26-linux-amd64/lib/Python/lib/python3.6/site-packages/SimpleITK-2.1.0.dev217-py3.6-linux-x86_64.egg/SimpleITK/SimpleITK.py", line 7861, in Execute
    return _SimpleITK.ImageFileReader_Execute(self)
RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /work/Preview/Slicer-0-build/ITK/Modules/IO/NIFTI/src/itkNiftiImageIO.cxx:1980:
ITK ERROR: ITK only supports orthonormal direction cosines. No orthonormal definition found!
