xuzhez / mapseg
mapseg's Introduction

MAPSeg: Unified Unsupervised Domain Adaptation for Heterogeneous Medical Image Segmentation Based on 3D Masked Autoencoding and Pseudo-Labeling

CVPR 2024 / arXiv

A unified UDA framework for 3D medical image segmentation that covers several scenarios:

MAPSeg can address centralized, federated, and test-time UDA within a single framework.

Built upon complementary masked autoencoding and pseudo-labeling:

(Framework overview figure)

Usage:

conda create --name mapseg --file requirements.txt
conda activate mapseg

For training:

python train.py --config=YOUR_PATH_TO_YAML

Training procedure:

MAE pretraining: We recommend training the encoder via Multi-scale 3D MAE for at least 300 epochs. For an example configuration file, please refer to here. We recommend leveraging large-scale unlabelled scans for MAE pretraining. We recommend storing each modality/domain in a separate folder; please refer to here. There are multiple criteria to split the domains, e.g., modality (CT/MRI), contrast (T1w/T2w), vendor (GE/Siemens), acquisition sequences (GRE, ZTE).

MPL UDA Finetuning: For an example configuration file for test-time UDA, please refer to here; for centralized UDA, check here. We recommend setting large_scale to True if there are at least 500 scans (including unlabelled scans) for both MAE and MPL. The pretrain_model configuration points to the absolute path of the model checkpoint produced by MAE pretraining. The data structure is similar to MAE, and the details are here.

For inference:

python test.py # be sure to edit test.py first

Parameters and data structure: There is a detailed explanation in /cfg/default.py.

Useful Suggestions:

More to be updated soon.

  1. The input image should have correct affine information (for orientation) in its header. The data loader will automatically adjust it to the RAS space as defined in Nibabel (see more).
  2. If orientation information is no longer available, please manually check all scans (the data matrices) to ensure they are in the same orientation. This is extremely important for pseudo-labeling (MAE pretraining is fine with different orientations).
  3. The data loader will zero out all negative intensities to extract boundary information appropriately. If you use CT, please add an offset so that all intensities are non-negative; what I did was add +1024 to all voxels and then set any remaining negative voxels to 0 (see the preprocessing sketch after this list).
  4. Two versions of the training script (for MPL only) are provided. In some of our other experiments, trainV2 appears to be more stable than the version introduced in the paper; we will share more information soon. Use train.py for multi-scale 3D MAE pretraining.
  5. Unfortunately, we need more time to integrate federated learning (FL) into this codebase. My collaborator (Yuhao) is working on it.
  6. For MPL, because of GPU memory limitations, we have not tested batch sizes larger than 1, and the current data loading may not work well for larger batch sizes. We are working to improve it.
  7. Directly using AMP does not really work for MAPSeg (it degrades performance). There are some helpful discussions here. Adding gradient clipping might help (see the AMP sketch after this list).
  8. More to be added. We have some exciting news regarding extended applications of MAPSeg (beyond cardiac and brain), stay tuned!
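
A minimal preprocessing sketch for suggestions 1-3 above, assuming NIfTI inputs and using nibabel and numpy (the file paths are placeholders, and the +1024 offset applies only to CT):

import nibabel as nib
import numpy as np

in_path, out_path = 'scan_ct.nii.gz', 'scan_ct_preproc.nii.gz'  # placeholder paths

img = nib.load(in_path)
img = nib.as_closest_canonical(img)        # reorient to RAS, matching the data loader's convention
data = img.get_fdata().astype(np.float32)

# CT only: shift intensities so tissue values become positive, then zero out
# anything still negative (air/background), since the loader erases negatives.
data = data + 1024.0
data[data < 0] = 0.0

nib.save(nib.Nifti1Image(data, img.affine), out_path)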
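
For suggestion 7, if you still want to try AMP, one standard PyTorch pattern is to unscale the gradients and clip them before the optimizer step. This is a generic sketch with a placeholder 3D network, not something we have validated for MAPSeg:

import torch

model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1).cuda()  # placeholder network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

scans = torch.randn(1, 1, 96, 96, 96, device='cuda')            # placeholder 96^3 patch
labels = torch.zeros(1, 96, 96, 96, dtype=torch.long, device='cuda')

optimizer.zero_grad()
with torch.cuda.amp.autocast():
    loss = loss_fn(model(scans), labels)
scaler.scale(loss).backward()
scaler.unscale_(optimizer)                       # unscale so clipping sees the true gradient norms
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
scaler.step(optimizer)
scaler.update()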

Acknowledgements:

Some components are borrowed from existing excellent repos, including patchify/unpatchify from MAE, building blocks from 3D UNet, and DeepLabV3. We thank the authors for their open-source contributions!

Cite:

If you find our work helpful, please cite it:

@InProceedings{Zhang_2024_CVPR,
author    = {Zhang, Xuzhe and Wu, Yuhao and Angelini, Elsa and Li, Ang and Guo, Jia and Rasmussen, Jerod M. and O'Connor, Thomas G. and Wadhwa, Pathik D. and Jackowski, Andrea Parolin and Li, Hai and Posner, Jonathan and Laine, Andrew F. and Wang, Yun},
title     = {MAPSeg: Unified Unsupervised Domain Adaptation for Heterogeneous Medical Image Segmentation Based on 3D Masked Autoencoding and Pseudo-Labeling},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month     = {June},
year      = {2024},
pages     = {5851-5862}}

mapseg's People

Contributors

xuzhez

Stargazers

MxShun3301, Minh-Hai Tran, Yang Jianhong, Sumin Kim, Elizabeth Nemeti, Jay, Liwen Wang, ManuC, HantaoZhang, JiayiChen815, ZhangZiyu, Vinson, KaiZen, zyfone, Li Xiang, Yunheng WU, Xinzi He

Watchers

Kostas Georgiou

Forkers

lixiang007666

mapseg's Issues

Validation error when depth (z-axis) is smaller than patch size 96

Dear authors,

Thank you for your prompt reply resolving the training error on the custom dataset! However, I encountered another error while running on the custom dataset. May I also ask how to resolve this? I have disabled the validation, but I would very much like to run inference after the training is done! Thank you in advance.

Traceback (most recent call last):
  File "/home/sumin2/Documents/MAPSeg/train.py", line 127, in <module>
    save_best = train_solver.validation(epoch)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sumin2/Documents/MAPSeg/model/solver.py", line 493, in validation
    tmp_pred = self.infer_single_scan(tmp_scans)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sumin2/miniconda3/envs/mapseg/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/sumin2/Documents/MAPSeg/model/solver.py", line 405, in infer_single_scan
    scan_patches, _, tmp_idx = util.patch_slicer(tmp_scans, tmp_scans, self.cfg.data.patch_size,
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sumin2/Documents/MAPSeg/model/utils/util.py", line 242, in patch_slicer
    for z_idx in z_steps:
  File "/home/sumin2/Documents/MAPSeg/model/utils/util.py", line 180, in _gen_indices
    assert i2 >= k, 'sample size has to be bigger than the patch size'
           ^^^^^^^
AssertionError: sample size has to be bigger than the patch size

Training on custom data where depth (z-axis) is smaller than patch size 96

Thank you for your amazing work! I am trying to train on a custom dataset and successfully trained the MAE model. However, during MPL training, I am getting the following error because the depth is less than the patch size.

Traceback (most recent call last):
  File "/home/sumin2/Documents/MAPSeg/train.py", line 128, in <module>
    save_best = train_solver.validation(epoch)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sumin2/Documents/MAPSeg/model/solver.py", line 493, in validation
    tmp_pred = self.infer_single_scan(tmp_scans)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sumin2/miniconda3/envs/mapseg/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/sumin2/Documents/MAPSeg/model/solver.py", line 405, in infer_single_scan
    scan_patches, _, tmp_idx = util.patch_slicer(tmp_scans, tmp_scans, self.cfg.data.patch_size,
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sumin2/Documents/MAPSeg/model/utils/util.py", line 243, in patch_slicer
    for z_idx in z_steps:
  File "/home/sumin2/Documents/MAPSeg/model/utils/util.py", line 180, in _gen_indices
    assert i2 >= k, f'sample size {i2} has to be bigger than the patch size {k}'
           ^^^^^^^
AssertionError: sample size 72 has to be bigger than the patch size 96

May I ask how I can resolve this?

Thank you very much.
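
A possible workaround for the two errors above (a sketch only, not an official fix from the authors; it assumes the assertion fires because the scan's z-dimension, e.g. 72, is smaller than data.patch_size, which is 96) is to zero-pad each scan to at least the patch size before running the pipeline. Any label map would need to be padded identically so voxel correspondence is preserved.

import nibabel as nib
import numpy as np

patch_size = 96                       # value of data.patch_size in the config
img = nib.load('scan.nii.gz')         # placeholder path
data = img.get_fdata()

# Zero-pad (background) every axis that is shorter than the patch size.
pad = [(0, max(0, patch_size - s)) for s in data.shape]
padded = np.pad(data, pad, mode='constant', constant_values=0)

nib.save(nib.Nifti1Image(padded.astype(np.float32), img.affine), 'scan_padded.nii.gz')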

Asking about the code release date

Thank you for your outstanding contribution. I am very interested in your research; may I ask when the code will be open-sourced?
Best wishes!

Release of code

Hi, awesome work! Just kindly asking: when can you release all the code to facilitate reproduction?
