
DG-TTA: Out-of-domain medical image segmentation through Domain Generalization and Test-Time Adaptation

Installation

DG-TTA is available as a package on PyPI. Just run:

pip3 install dg-tta

Optionally, install wandb to log results to your Weights & Biases dashboard.

nnUNet dependency

The nnUNet framework is installed automatically alongside DG-TTA. Please refer to https://github.com/MIC-DKFZ/nnUNet for how to prepare your datasets. DG-TTA requires datasets prepared in the nnUNet v2 format.
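For orientation, here is a minimal sketch of the nnUNet v2 raw-data layout. The dataset name `Dataset802_Example` and the channel/label names are illustrative assumptions; the nnUNet repository linked above is the authoritative reference for the format.

```shell
# Sketch of an nnUNet v2 raw dataset (names are illustrative).
RAW=${nnUNet_raw:-/tmp/nnUNet_raw}
mkdir -p "$RAW/Dataset802_Example/imagesTr" "$RAW/Dataset802_Example/labelsTr"

# Minimal v2-style dataset.json (fields per the nnUNet v2 format).
cat > "$RAW/Dataset802_Example/dataset.json" <<'EOF'
{
  "channel_names": {"0": "CT"},
  "labels": {"background": 0, "organ": 1},
  "numTraining": 1,
  "file_ending": ".nii.gz"
}
EOF

# Images are named <case>_<channel>.nii.gz, labels <case>.nii.gz, e.g.:
#   imagesTr/case001_0000.nii.gz   labelsTr/case001.nii.gz
```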

Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211.

Usage

Run dgtta -h from your command line to get started. There are four basic commands available:

  1. dgtta inject_trainers: Copies our specialized trainers with DG techniques into the nnUNet framework and makes them available there. Available trainers are:
  • nnUNetTrainer_GIN (optimized for 1.5mm spacing)
  • nnUNetTrainer_MIND (optimized for 1.5mm spacing)
  • nnUNetTrainer_GIN_MIND (optimized for 1.5mm spacing)
  • nnUNetTrainer_GIN_MultiRes (optimized for 1.5, 3.0, 6.0 and 9.0mm spacing)
  • nnUNetTrainer_MIND_MultiRes (optimized for 1.5, 3.0, 6.0 and 9.0mm spacing)
  • nnUNetTrainer_GIN_MIND_MultiRes (optimized for 1.5, 3.0, 6.0 and 9.0mm spacing)
  2. dgtta pretrain: Pre-train a model on a (CT) dataset with one of our trainers.
  3. dgtta prepare_tta: After pre-training, prepare TTA by specifying the source and target dataset.
  4. dgtta run_tta: After preparation, run TTA on a target (MRI) dataset and evaluate how well the model bridged the domain gap.

If you want to perform TTA without pre-training, you can skip step 2 and use one of our pre-trained models (trained on the TotalSegmentator dataset).

Examples

Prepare a paths.sh file which exports the following variables:

#!/usr/bin/bash
export nnUNet_raw="/path/to/dir"
export nnUNet_preprocessed="/path/to/dir"
export nnUNet_results="/path/to/dir"
export DG_TTA_ROOT="/path/to/dir"
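A forgotten source paths.sh is a common source of confusing errors. The following hedged sketch (not part of dg-tta; the variable names are taken from the block above) fails fast when one of them is missing:

```shell
# Sketch: verify the paths.sh variables are set before calling dgtta.
# Uses bash indirect expansion ("${!var}") to look up each name.
check_dgtta_paths() {
  local var missing=0
  for var in nnUNet_raw nnUNet_preprocessed nnUNet_results DG_TTA_ROOT; do
    if [ -z "${!var}" ]; then
      echo "ERROR: $var is not set -- did you 'source paths.sh'?" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Run source paths.sh && check_dgtta_paths before any dgtta command; a non-zero exit code means at least one path variable is missing.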
  1. Use case: Get to know the tool
  • source paths.sh && dgtta
  2. Use case: Pre-train a GIN_MIND model on nnUNet dataset 802
  • source paths.sh && dgtta inject_trainers --num_epochs 150
  • source paths.sh && dgtta pretrain -tr nnUNetTrainer_GIN_MIND 802 3d_fullres 0
  3. Use case: Run TTA on dataset 678 for the model pre-trained in use case 2
  • Inject trainers: source paths.sh && dgtta inject_trainers
  • Prepare TTA: source paths.sh && dgtta prepare_tta 802 678 --pretrainer nnUNetTrainer_GIN_MIND --pretrainer_config 3d_fullres --pretrainer_fold 0 --tta_dataset_bucket imagesTrAndTs
  • Now inspect and adapt the plan.json (see the console output of the preparation step), e.g. remove samples on which you do not want to perform TTA, or change the number of TTA epochs.
  • Also inspect the notebook inside the plans folder and visualize the dataset orientation. Modify the functions in modifier_functions.py as explained in the notebook to get the input/output orientation of the TTA data right.
  • Run TTA: source paths.sh && dgtta run_tta 802 678 --pretrainer nnUNetTrainer_GIN_MIND --pretrainer_config 3d_fullres --pretrainer_fold 0
  • Find the results inside the DG_TTA_ROOT directory.
  4. Use case: Run TTA on dataset 678 with our pre-trained GIN_MIND model
  • Inject trainers: source paths.sh && dgtta inject_trainers
  • Prepare TTA: source paths.sh && dgtta prepare_tta TS104_GIN_MIND 678 --tta_dataset_bucket imagesTrAndTs
  • Run TTA: source paths.sh && dgtta run_tta TS104_GIN_MIND 678
  5. All of our pre-trained models (for use in case 4):
  • TS104_GIN
  • TS104_MIND
  • TS104_GIN_MIND
  • TS104_GIN_MultiRes
  • TS104_MIND_MultiRes
  • TS104_GIN_MIND_MultiRes
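The last use case above (TTA with our pre-trained GIN_MIND model) can be collected into a single script. The dgtta commands are taken verbatim from the list above; the script name is illustrative, and paths.sh is assumed to exist in the working directory:

```shell
# Sketch: write the pre-trained GIN_MIND TTA workflow into one script.
cat > tta_pretrained.sh <<'EOF'
#!/usr/bin/bash
set -e
source paths.sh
dgtta inject_trainers
dgtta prepare_tta TS104_GIN_MIND 678 --tta_dataset_bucket imagesTrAndTs
# Inspect/edit the generated plan.json and modifier_functions.py here
# before continuing (see the use-case notes above).
dgtta run_tta TS104_GIN_MIND 678
EOF
chmod +x tta_pretrained.sh
```

Because prepare_tta expects you to review plan.json and modifier_functions.py before running TTA, in practice you may want to split the script at the marked comment rather than running it end to end.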

Please refer to our work

If you use DG-TTA, please cite:

Weihsbach, C., Kruse, C. N., Bigalke, A., & Heinrich, M. P. (2023). DG-TTA: Out-of-domain medical image segmentation through Domain Generalization and Test-Time Adaptation. arXiv preprint arXiv:2312.06275.

https://arxiv.org/abs/2312.06275
