bowang-lab / u-mamba

U-Mamba: Enhancing Long-range Dependency for Biomedical Image Segmentation

Home Page: https://arxiv.org/abs/2401.04722

License: Apache License 2.0

Python 99.50% Shell 0.50%

u-mamba's Introduction

Official repository for U-Mamba: Enhancing Long-range Dependency for Biomedical Image Segmentation. Welcome to join our mailing list to get updates.

Installation

Requirements: Ubuntu 20.04, CUDA 11.8

  1. Create a virtual environment: conda create -n umamba python=3.10 -y and conda activate umamba
  2. Install PyTorch 2.0.1: pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
  3. Install Mamba: pip install causal-conv1d>=1.2.0 and pip install mamba-ssm --no-cache-dir
  4. Download code: git clone https://github.com/bowang-lab/U-Mamba
  5. cd U-Mamba/umamba and run pip install -e .

Sanity test: enter the Python command-line interface and run

import torch
import mamba_ssm
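
If both imports succeed, an optional extra check (not part of the official instructions; the layer sizes below are arbitrary) is to push a random tensor through a Mamba block to confirm the CUDA kernels work:

import torch
from mamba_ssm import Mamba

# Arbitrary small sizes; any values work for a smoke test.
block = Mamba(d_model=16, d_state=16, d_conv=4, expand=2).to("cuda")
x = torch.randn(2, 64, 16, device="cuda")  # (batch, sequence length, d_model)
y = block(x)
print(y.shape)  # expected: torch.Size([2, 64, 16])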

[network architecture figure]

[visual_seg.mp4: video of segmentation results]

Model Training

Download the datasets here and put them into the data folder. U-Mamba is built on the popular nnU-Net framework. If you want to train U-Mamba on your own dataset, please follow this guideline to prepare the dataset.

Preprocessing

nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity
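
For example, assuming the provided AbdomenCT dataset (Dataset701_AbdomenCT) has been placed under data/nnUNet_raw, the call would use its integer ID:

nnUNetv2_plan_and_preprocess -d 701 --verify_dataset_integrity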

Train 2D models

  • Train 2D U-Mamba_Bot model
nnUNetv2_train DATASET_ID 2d all -tr nnUNetTrainerUMambaBot
  • Train 2D U-Mamba_Enc model
nnUNetv2_train DATASET_ID 2d all -tr nnUNetTrainerUMambaEnc

Train 3D models

  • Train 3D U-Mamba_Bot model
nnUNetv2_train DATASET_ID 3d_fullres all -tr nnUNetTrainerUMambaBot
  • Train 3D U-Mamba_Enc model
nnUNetv2_train DATASET_ID 3d_fullres all -tr nnUNetTrainerUMambaEnc

Inference

  • Predict testing cases with U-Mamba_Bot model
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_ID -c CONFIGURATION -f all -tr nnUNetTrainerUMambaBot --disable_tta
  • Predict testing cases with U-Mamba_Enc model
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_ID -c CONFIGURATION -f all -tr nnUNetTrainerUMambaEnc --disable_tta

CONFIGURATION can be 2d or 3d_fullres for 2D and 3D models, respectively.
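
Putting the steps together, a hypothetical end-to-end run for the 3D U-Mamba_Bot model on dataset 701 could look like this (INPUT_FOLDER and OUTPUT_FOLDER are placeholders to adapt to your layout):

nnUNetv2_train 701 3d_fullres all -tr nnUNetTrainerUMambaBot
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d 701 -c 3d_fullres -f all -tr nnUNetTrainerUMambaBot --disable_tta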

Remarks

  1. Path settings

The default data directory for U-Mamba is preset to U-Mamba/data. Users with an existing nnUNet setup who wish to use alternative directories for nnUNet_raw, nnUNet_preprocessed, and nnUNet_results can adjust these paths in umamba/nnunetv2/paths.py to match their specific nnUNet data directory locations, as demonstrated below:

# An example of setting a different data path
base = '/home/user_name/Documents/U-Mamba/data'
nnUNet_raw = join(base, 'nnUNet_raw') # or change to os.environ.get('nnUNet_raw')
nnUNet_preprocessed = join(base, 'nnUNet_preprocessed') # or change to os.environ.get('nnUNet_preprocessed')
nnUNet_results = join(base, 'nnUNet_results') # or change to os.environ.get('nnUNet_results')
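
If you would rather have the standard nnU-Net environment variables take precedence over the hard-coded default (see the "Standard nnUNet environment variables overwritten" issue below), a minimal sketch is to fall back to the repository path only when the variables are unset; this is a suggested variant, not the shipped default:

import os
from os.path import join

base = '/home/user_name/Documents/U-Mamba/data'
# Prefer the environment variables; fall back to the repository default.
nnUNet_raw = os.environ.get('nnUNet_raw', join(base, 'nnUNet_raw'))
nnUNet_preprocessed = os.environ.get('nnUNet_preprocessed', join(base, 'nnUNet_preprocessed'))
nnUNet_results = os.environ.get('nnUNet_results', join(base, 'nnUNet_results'))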
  2. AMP could lead to NaN in the Mamba module. We also provide a trainer without AMP: https://github.com/bowang-lab/U-Mamba/blob/main/umamba/nnunetv2/training/nnUNetTrainer/nnUNetTrainerUMambaEncNoAMP.py

Paper

@article{U-Mamba,
    title={U-Mamba: Enhancing Long-range Dependency for Biomedical Image Segmentation},
    author={Ma, Jun and Li, Feifei and Wang, Bo},
    journal={arXiv preprint arXiv:2401.04722},
    year={2024}
}

Acknowledgements

We acknowledge all the authors of the public datasets used in this work for allowing the community to use these valuable resources for research purposes. We also thank the authors of nnU-Net and Mamba for making their valuable code publicly available.

u-mamba's People

Contributors: junma11

u-mamba's Issues

How to do model inference and obtain the results in the paper

As the output folder only contains fold_all, I added -f all to the provided inference command.

The command used for testing:
nnUNetv2_predict -i ./data/nnUNet_results/Dataset704_Endovis17 -o ./test_endo -d 704 -c 2d -tr nnUNetTrainerUMambaBot --disable_tta -f all

The results:
[screenshot of results]

Predict error

Hello, I get this error when predicting an image. Why?
RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:105:
sitk::ERROR: Unable to determine ImageIO reader for "/mnt/workspace/szc/U-mamba/U-Mamba/data/nnUNet_raw/Dataset701_AbdomenCT/imagesVal/._FLARETs_0023_0000.nii.gz"

Available Docker Image for Conda env UMamba

Hi Everyone
Thanks to the authors for this great work.
For my scientific research, I am currently exploring U-Mamba and have created a public docker image that contains the complete Conda environment to run U-Mamba.
Docker : Umamba-docker
Just follow these steps when the docker image is open:

conda init
exec bash
conda activate umamba

Deadlock

Running the package with my custom dataset results in a deadlock.

2024-06-18 11:52:18.824864: unpacking dataset...
2024-06-18 11:58:07.176028: unpacking done...
2024-06-18 11:58:07.179740: do_dummy_2d_data_aug: False
2024-06-18 11:58:07.419208: Unable to plot network architecture:
2024-06-18 11:58:07.419345: No module named 'hiddenlayer'
2024-06-18 11:58:07.825781:
2024-06-18 11:58:07.825900: Epoch 0
2024-06-18 11:58:07.826053: Current learning rate: 0.01
/usr/lib/python3.10/multiprocessing/popen_fork.py:66: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.
self.pid = os.fork()
using pin_memory on device 0

How to avoid large number of sequence length?

Hi, @JunMa11 . Thanks for your great work.
I have a small question related to the network setting.
Since the sequence length L is set to the product of C, H, and W of the image patch according to the paper, then given an image patch such as 320x320, C can be 32 if my understanding of the code is correct, so L is 160x160x32 = 819.2K (after the first pooling) at the first scale of the U-Net, which can be quite large.
Do I misunderstand some details? Or are there some strategies to avoid such a case?
Thanks again and look forward to your help :)

How to solve the problem of experiments stalling?

Hello,

When I train the model, the experiment stops at a certain epoch and doesn't continue training. The GPU usage is at 1% and the memory usage is 12GB, indicating that the experiment is still running. However, it stays stuck at the current epoch for an entire night, preventing the experiment from progressing. What could be the problem? Can you help explain this?
[screenshot: 2024-05-30 093003]

Thank you.

What is DATASET_ID?

Hi,

nnUNetv2_plan_and_preprocess -d Dataset701_AbdomenCT --verify_dataset_integrity

nnUNetv2_plan_and_preprocess: error: argument -d: invalid int value: 'Dataset701_AbdomenCT'

Looking forward to your insights.

Best regards
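
Based on the error message, -d expects the integer dataset ID rather than the dataset folder name, so for Dataset701_AbdomenCT the call would presumably be:

nnUNetv2_plan_and_preprocess -d 701 --verify_dataset_integrity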

Operational issues

Hello, I encountered an issue during the replication process. When running nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity, I got:

Configuration: 3d_lowres
INFO: Configuration 3d_lowres not found in plans file nnUNetPlans.json of dataset Dataset004_Hippocampus. Skipping.

Could you please help with this problem? Hope to receive your reply, thank you!

Residual block

Hello, to match the paper, a residual block should:

apply convolution, then normalization, then the activation function, and then add the residual, two times. Thus I would have guessed that this should be written as:

def forward(self, x):
    residual = x
    y = self.conv1(x)
    y = self.act1(self.norm1(y))

    y = y + residual
    residual = y

    y = self.norm2(self.conv2(y))
    y = self.act2(y)
    y += residual

    return y

which is not the case in the residual block I've found in the repo, where the residual is added before the activation of the second block, such as:


def forward(self, x):
    y = self.conv1(x)
    y = self.act1(self.norm1(y))
    y = self.norm2(self.conv2(y))
    y += x
    return self.act2(y)
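
For reference, here is a self-contained sketch of a residual block with the conv → norm → act → add ordering described above; the module and hyperparameter choices are illustrative, not the repository's actual implementation:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv-norm-act units, each followed by a residual addition."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm1 = nn.InstanceNorm2d(channels)
        self.act1 = nn.LeakyReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm2 = nn.InstanceNorm2d(channels)
        self.act2 = nn.LeakyReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.act1(self.norm1(self.conv1(x)))
        y = y + x             # first residual addition, after the activation
        residual = y
        y = self.act2(self.norm2(self.conv2(y)))
        return y + residual   # second residual addition, after the activation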

Slow training time (can be fixed)

Hi Jun,

awesome work! While playing with your repo I noticed that training times are WAY slower than they should be. When using the regular nnUNetTrainer, an epoch of Hippocampus takes 22s instead of 7-8 (on RTX 4090) even though none of the Mamba stuff should be involved.

I traced this back to the way you install pytorch. I recommend you change the instructions to
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
(taken straight from the pytorch website. cuda 11.8 is important as it won't work with cuda 12 due to the causal conv repo)

This has the following effect for me:

  • regular nnUNetTrainer on Hippocampus goes from 22s -> 7.5s per epoch
  • nnUNetTrainerUMambaEnc goes from >60s to 24s

Note that I only verified that trainings are running, please make sure everything works fine before changing that :-)

Best,
Fabian

Reproduction problem

Hello, I ran into a problem while reproducing the results. When running nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity, I got:

Configuration: 3d_lowres...
INFO: Configuration 3d_lowres not found in plans file nnUNetPlans.json of dataset Dataset004_Hippocampus. Skipping.

How can I solve this? Looking forward to your reply, thank you!

changing nnUNet base directories

Hello! Thanks for putting this tool together. I just wanted to bring up a bug that I noticed while testing this repo. I've used nnUNet before, so I placed my data in a different location than U-Mamba/data and changed the default nnUNet_raw/preprocessed/results directories using bash's export command. However, your custom script doesn't reflect those changes when you run any of the nnUNet commands; it still points nnUNet_raw to U-Mamba/data/nnUNet_raw (same with preprocessed and results). Just wanted to bring this to attention in case other experienced nnUNet users try this out. Great work on the paper and model!

Can not import the old UMamba .pth from old-v0

Thank you for releasing the code and the model. It is very cool.

I tried to import a UMamba .pth file from old-v0, but I get an error. I think it is because the GitHub code has been changed in a newer version. If I want to use an old version of the UMamba_Enc and UMamba_Bot code, where should I look?

RuntimeError: CUDA error: no kernel image is available for execution on the device

I encountered the following error while running the "nnUNetv2_train 701 3d_fullres all -tr nnUNetTrainerUMambaBot" in your dataset.

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

The GPU is a P40; the CUDA version reported by nvidia-smi is 12.0.

Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.rand(10).to("cuda")
tensor([0.5094, 0.2796, 0.1893, 0.6431, 0.7310, 0.0044, 0.7085, 0.6361, 0.3852,
        0.5175], device='cuda:0')
>>> torch.cuda.device_count()
2
>>> torch.cuda.is_available()
True
>>> torch.version.cuda
'11.7'

Package Version Editable project location


acvl-utils                    0.2
asttokens                     2.4.1
attrs                         23.2.0
Automat                       22.10.0
batchgenerators               0.25
buildtools                    1.0.6
causal-conv1d                 1.1.1
certifi                       2023.11.17
charset-normalizer            3.3.2
cmake                         3.28.1
comm                          0.2.1
connected-components-3d       3.12.4
constantly                    23.10.4
contourpy                     1.2.0
cycler                        0.12.1
dicom2nifti                   2.4.9
docopt                        0.6.2
dynamic-network-architectures 0.2
einops                        0.7.0
filelock                      3.13.1
fonttools                     4.47.2
fsspec                        2023.12.2
furl                          2.1.3
future                        0.18.3
graphviz                      0.20.1
greenlet                      3.0.3
huggingface-hub               0.20.2
hyperlink                     21.0.0
idna                          3.6
imagecodecs                   2024.1.1
imageio                       2.33.1
incremental                   22.10.0
Jinja2                        3.1.3
joblib                        1.3.2
kiwisolver                    1.4.5
lazy_loader                   0.3
linecache2                    1.0.0
lit                           17.0.6
mamba-ssm                     1.1.1
MarkupSafe                    2.1.3
matplotlib                    3.8.2
MedPy                         0.4.0
monai                         1.3.0
mpmath                        1.3.0
networkx                      3.2.1
nibabel                       5.2.0
ninja                         1.11.1.1
nnunetv2                      2.1.1        /media/DataB/ykw/U-Mamba/umamba
numpy                         1.26.3
nvidia-cublas-cu11            11.10.3.66
nvidia-cuda-cupti-cu11        11.7.101
nvidia-cuda-nvrtc-cu11        11.7.99
nvidia-cuda-runtime-cu11      11.7.99
nvidia-cudnn-cu11             8.5.0.96
nvidia-cufft-cu11             10.9.0.58
nvidia-curand-cu11            10.2.10.91
nvidia-cusolver-cu11          11.4.0.1
nvidia-cusparse-cu11          11.7.4.91
nvidia-nccl-cu11              2.14.3
nvidia-nvtx-cu11              11.7.91
opencv-python                 4.9.0.80
orderedmultidict              1.0.1
packaging                     23.2
pandas                        2.1.4
pillow                        10.2.0
pip                           23.3.1
pydicom                       2.4.4
pyparsing                     3.1.1
python-dateutil               2.8.2
python-gdcm                   3.0.23
pytz                          2023.3.post1
PyYAML                        6.0.1
redo                          2.0.4
regex                         2023.12.25
requests                      2.31.0
safetensors                   0.4.1
scikit-image                  0.22.0
scikit-learn                  1.3.2
scipy                         1.11.4
seaborn                       0.13.1
setuptools                    68.2.2
SimpleITK                     2.3.1
simplejson                    3.19.2
six                           1.16.0
SQLAlchemy                    2.0.25
sympy                         1.12
threadpoolctl                 3.2.0
tifffile                      2023.12.9
tokenizers                    0.15.0
torch                         2.0.1
torchvision                   0.15.2
tqdm                          4.66.1
traceback2                    1.4.0
traitlets                     5.14.1
transformers                  4.36.2
triton                        2.0.0
Twisted                       23.10.0
typing_extensions             4.9.0
tzdata                        2023.4
unittest2                     1.1.0
urllib3                       2.1.0
wheel                         0.41.2
yacs                          0.1.8
zope.interface                6.1

median_image_size_in_voxels

Your 'median_image_size_in_voxels' field now has a value of [96, 310, 310], right?
I want to change the value of the 'median_image_size_in_voxels' field in the 3D lowres U-Net configuration to [96, 96, 96]. Where can I modify it?
I can't find a file in the nnUNetv2 framework that stores configuration information, such as nnUNetPlans.json.
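
For reference (standard nnU-Net v2 behavior rather than anything specific to this repo): the plans file is generated by nnUNetv2_plan_and_preprocess and written into the preprocessed-data folder, e.g. data/nnUNet_preprocessed/DatasetXXX_Name/nnUNetPlans.json; fields such as median_image_size_in_voxels and patch_size live under its per-configuration entries.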

problem in testing

I can only get a high Dice score when training with fold "all", but it doesn't work when testing. What should I do to find the best configuration? Simply changing "(0,1,2,3,4)" to "all" in find_best_configuration.py and running it doesn't work.

Why Mamba layers only in encoder?

Hello,

I am sorry if this is a stupid question, but it is not quite clear to me why the Mamba layers are only applied in the encoder and not in the decoder as well. I found the paper unclear in that regard.

Thank you

Error with "pip install -e ."

WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /simple/batchgenerators/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /simple/batchgenerators/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /simple/batchgenerators/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /simple/batchgenerators/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /simple/batchgenerators/
ERROR: Could not find a version that satisfies the requirement argparse (from unittest2) (from versions: none) ERROR: No matching distribution found for argparse

Argument passed to at() was not in the map.

I'm having problems replacing U-Mamba's encoder and decoder with nnUNet's original decoder:

Argument passed to at() was not in the map.

And when this problem occurs, the pseudo-loss stays at 0.
Have you seen this problem before? Looking forward to your reply!

Wrong config in dataset.json

Hi, Jun. Thanks for your awesome work.

However, the dataset.json in Dataset702_AbdomenMR may have some problems. It leads to the following error during data preprocessing:

(umamba) fgldlb@fgldlb-Precision-Tower-7910:~/Documents/mamba/U-Mamba/data$ nnUNetv2_plan_and_preprocess -d 702 --verify_dataset_i
ntegrity
Fingerprint extraction...
Dataset702_AbdomenMR
Traceback (most recent call last):
  File "/usr/local/anaconda3/envs/umamba/bin/nnUNetv2_plan_and_preprocess", line 33, in <module>
    sys.exit(load_entry_point('nnunetv2', 'console_scripts', 'nnUNetv2_plan_and_preprocess')())
  File "/home/fgldlb/Documents/mamba/U-Mamba/umamba/nnunetv2/experiment_planning/plan_and_preprocess_entrypoints.py", line 182, in plan_and_preprocess_entry
    extract_fingerprints(args.d, args.fpe, args.npfp, args.verify_dataset_integrity, args.clean, args.verbose)
  File "/home/fgldlb/Documents/mamba/U-Mamba/umamba/nnunetv2/experiment_planning/plan_and_preprocess_api.py", line 47, in extract_fingerprints
    extract_fingerprint_dataset(d, fingerprint_extractor_class, num_processes, check_dataset_integrity, clean,
  File "/home/fgldlb/Documents/mamba/U-Mamba/umamba/nnunetv2/experiment_planning/plan_and_preprocess_api.py", line 30, in extract_fingerprint_dataset
    verify_dataset_integrity(join(nnUNet_raw, dataset_name), num_processes)
  File "/home/fgldlb/Documents/mamba/U-Mamba/umamba/nnunetv2/experiment_planning/verify_dataset_integrity.py", line 155, in verify_dataset_integrity
    assert len(dataset) == expected_num_training, 'Did not find the expected number of training cases ' \
AssertionError: Did not find the expected number of training cases (50). Found 60 instead.
Examples: ['amos_0507', 'amos_0508', 'amos_0510', 'amos_0514', 'amos_0517']

and the following changes in data/nnUNet_raw/Dataset702_AbdomenMR/dataset.json work for me:

{
    "channel_names": {
        "0": "MR"
    },
    "labels": {
        "background": 0,
        "liver": 1,
        "right kidney": 2,
        "spleen": 3,
        "pancreas": 4,
        "aorta": 5,
        "inferior vena cava": 6,
        "right adrenal gland": 7,
        "left adrenal gland": 8,  // "right adrenal gland" ==> "left adrenal gland"
        "gallbladder": 9,
        "esophagus": 10,
        "stomach": 11,
        "duodenum": 12,
        "left kidney": 13
    },
    "numTraining": 60,   // 50 -> 60
    "file_ending": ".nii.gz",
    "name": "Dataset702_AbdomenMR",
    "description": "This dataset was from MICCAI AMOS 2022 Challenge. The original dataset contained 60 annotation cases. We annotated another 50 MRI scans as the testing set. The annotations were generated by radiologists with the assistance of MedSAM and ITK-SNAP."
}

The left adrenal gland problem appears only in the Google Drive files, but both the Git and Google Drive versions have the wrong numTraining.
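
A quick way to catch this kind of mismatch before preprocessing is to compare numTraining with the actual number of training cases on disk. A minimal sketch (the dataset path is a placeholder):

import json
import os

dataset_dir = 'data/nnUNet_raw/Dataset702_AbdomenMR'  # placeholder path

with open(os.path.join(dataset_dir, 'dataset.json')) as f:
    meta = json.load(f)

ending = meta['file_ending']  # e.g. '.nii.gz'
images_tr = os.path.join(dataset_dir, 'imagesTr')
# Each training image is named <case>_<channel><ending>; collect unique case IDs,
# skipping hidden files such as macOS '._' resource forks.
cases = {f[:-len(ending)].rsplit('_', 1)[0]
         for f in os.listdir(images_tr)
         if f.endswith(ending) and not f.startswith('.')}
print(f"numTraining in dataset.json: {meta['numTraining']}, cases found: {len(cases)}")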

Prediction-RuntimeError: Some background workers are no longer alive

Hello, I encountered an issue when validating the 701 dataset: the program throws an error after predicting 9 cases.
The problem arises when using the Python multiprocessing module for parallel computation. Based on the error messages, there are two main issues: RuntimeError: Some background workers are no longer alive, and multiprocessing.managers.RemoteError with a KeyError. How can this be resolved?

Error message as follows:

Predicting FLARETs_0010:
perform_everything_on_device: True
Traceback (most recent call last):
File "/root/miniconda3/envs/umamba/bin/nnUNetv2_predict", line 33, in
sys.exit(load_entry_point('nnunetv2', 'console_scripts', 'nnUNetv2_predict')())
File "/root/onethingai-tmp/U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 831, in predict_entry_point
predictor.predict_from_files(args.i, args.o, save_probabilities=args.save_probabilities,
File "/root/onethingai-tmp/U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 250, in predict_from_files
return self.predict_from_data_iterator(data_iterator, save_probabilities, num_processes_segmentation_export)
File "/root/onethingai-tmp/U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 366, in predict_from_data_iterator
proceed = not check_workers_alive_and_busy(export_pool, worker_list, r, allowed_num_queued=2)
File "/root/onethingai-tmp/U-Mamba/umamba/nnunetv2/utilities/file_path_utilities.py", line 103, in check_workers_alive_and_busy
raise RuntimeError('Some background workers are no longer alive')
RuntimeError: Some background workers are no longer alive
Process SpawnProcess-8:
Traceback (most recent call last):
File "/root/miniconda3/envs/umamba/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/root/miniconda3/envs/umamba/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/root/onethingai-tmp/U-Mamba/umamba/nnunetv2/inference/data_iterators.py", line 57, in preprocess_fromfiles_save_to_queue
raise e
File "/root/onethingai-tmp/U-Mamba/umamba/nnunetv2/inference/data_iterators.py", line 50, in preprocess_fromfiles_save_to_queue
target_queue.put(item, timeout=0.01)
File "", line 2, in put
File "/root/miniconda3/envs/umamba/lib/python3.10/multiprocessing/managers.py", line 833, in _callmethod
raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:

Traceback (most recent call last):
File "/root/miniconda3/envs/umamba/lib/python3.10/multiprocessing/managers.py", line 260, in serve_client
self.id_to_local_proxy_obj[ident]
KeyError: '7ff7fdd1f460'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/miniconda3/envs/umamba/lib/python3.10/multiprocessing/managers.py", line 262, in serve_client
raise ke
File "/root/miniconda3/envs/umamba/lib/python3.10/multiprocessing/managers.py", line 256, in serve_client
obj, exposed, gettypeid = id_to_obj[ident]
KeyError: '7ff7fdd1f460'

Installation errors

Install Pytorch 2.0.1: pip install torch==2.0.1 torchvision==0.15.2
Install Mamba: pip install causal-conv1d==1.1.1 and pip install mamba-ssm

The torch version is 2.0.1, but the causal-conv1d==1.1.1 is for torch 1.12?

preprocessing

Hello, may I ask about the following preprocessing command:

  1. How exactly should it be used?
  2. What is the dataset number? Is it the specific name of the dataset?

nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity

Test sets don't work well. Does the test set data need the same preprocessing as the training set, or is the test data read directly from image_raw?

The test sets don't work well. Does the test-set data need the same preprocessing as the training set, or is the test data read directly from image_raw?
Answers would be appreciated, thank you very much!

Data Output for Inference

When running inference on a trained model, where can I find data results? I see the output images, but is there more that is produced?

import error

Hi, thanks for your excellent work. When I run nnUNetv2_train 1 3d_fullres all -tr nnUNetTrainerUMambaBot I get an error (see attached screenshot). I don't know how to solve it. Could you give me some advice?

error: nnUNetv2_plan_and_preprocess

Fingerprint extraction...
Traceback (most recent call last):
File "/root/anaconda3/envs/umamba/bin/nnUNetv2_plan_and_preprocess", line 33, in
sys.exit(load_entry_point('nnunetv2==2.1.1', 'console_scripts', 'nnUNetv2_plan_and_preprocess')())
File "/root/anaconda3/envs/umamba/lib/python3.10/site-packages/nnunetv2-2.1.1-py3.10.egg/nnunetv2/experiment_planning/plan_and_preprocess_entrypoints.py", line 182, in plan_and_preprocess_entry
extract_fingerprints(args.d, args.fpe, args.npfp, args.verify_dataset_integrity, args.clean, args.verbose)
File "/root/anaconda3/envs/umamba/lib/python3.10/site-packages/nnunetv2-2.1.1-py3.10.egg/nnunetv2/experiment_planning/plan_and_preprocess_api.py", line 47, in extract_fingerprints
extract_fingerprint_dataset(d, fingerprint_extractor_class, num_processes, check_dataset_integrity, clean,
File "/root/anaconda3/envs/umamba/lib/python3.10/site-packages/nnunetv2-2.1.1-py3.10.egg/nnunetv2/experiment_planning/plan_and_preprocess_api.py", line 26, in extract_fingerprint_dataset
dataset_name = convert_id_to_dataset_name(dataset_id)
File "/root/anaconda3/envs/umamba/lib/python3.10/site-packages/nnunetv2-2.1.1-py3.10.egg/nnunetv2/utilities/dataset_name_id_conversion.py", line 48, in convert_id_to_dataset_name
raise RuntimeError(f"Could not find a dataset with the ID {dataset_id}. Make sure the requested dataset ID "
RuntimeError: Could not find a dataset with the ID 2. Make sure the requested dataset ID exists and that nnU-Net knows where raw and preprocessed data are located (see Documentation - Installation). Here are your currently defined folders:
nnUNet_preprocessed=/data1/dataset/nnUNet_preprocessed
nnUNet_results=/data1/dataset/nnUNet_results
nnUNet_raw=/data1/dataset/nnUNet_raw
If something is not right, adapt your environment variables.

Worse results of the released model on Dataset701_AbdomenCT

Thank you for releasing the code and the model.

I have some questions about the testing results on Dataset701_AbdomenCT: the quantitative results are much worse than the results reported in your paper. I got 0.5408956923076923 DSC and 0.5480656923076923 by using your released model directly for inference.
I also checked and visualized the ground truth and predictions on the test set:
[screenshot]

BTW, it seems the images in the train and test sets of Dataset701_AbdomenCT were collected under different settings.

[screenshot]

I wonder whether I am doing something wrong, missed some preprocessing steps, or just downloaded the wrong dataset?

Thanks!

permission error

[It has been solved!]
error: could not create 'nnunetv2.egg-info': Permission denied

How can I solve this error?
-Ubuntu22
-cuda12.1

Thank you in advance!

Regarding Randomness

Hello,

Your work is very interesting and I have a question about the code that I'd like to discuss with you. Despite fixing all the random seeds, I'm still observing randomness in the results of my runs. Have you encountered this kind of issue before?

Nan When training

Hi,

Excellent work on the U-Mamba model! I have been attempting to train U-Mamba using my own datasets, which include both 2D and 3D data. However, I've faced an issue where the training process sometimes results in a 'nan' training loss despite trying different datasets. Have you experienced this issue during your training of U-Mamba? I used the nnUNetTrainerUMambaEnc trainer for this process.

Looking forward to your insights.

Best regards,
Leuan

Training loss becomes NaN starting at epoch 30

2024-05-06 06:49:37.241385: Epoch 29
2024-05-06 06:49:37.241527: Current learning rate: 0.00974
2024-05-06 06:52:36.636003: train_loss -0.5247
2024-05-06 06:52:36.636215: val_loss -0.6027
2024-05-06 06:52:36.636309: Pseudo dice [0.5292, 0.0, 0.4787, nan, 0.0, 0.0]
2024-05-06 06:52:36.636381: Epoch time: 179.4 s
2024-05-06 06:52:37.962367:
2024-05-06 06:52:37.962497: Epoch 30
2024-05-06 06:52:37.962595: Current learning rate: 0.00973
2024-05-06 06:55:37.388743: train_loss -0.4569
2024-05-06 06:55:37.388954: val_loss -0.5265
2024-05-06 06:55:37.389062: Pseudo dice [0.4189, 0.0, 0.3591, nan, 0.0, 0.0]
2024-05-06 06:55:37.389136: Epoch time: 179.43 s
2024-05-06 06:55:38.751563:
2024-05-06 06:55:38.751762: Epoch 31
2024-05-06 06:55:38.751870: Current learning rate: 0.00972
2024-05-06 06:58:34.520405: train_loss nan
2024-05-06 06:58:34.520603: val_loss nan
2024-05-06 06:58:34.520698: Pseudo dice [0.0199, 0.0, 0.0022, nan, 0.0, 0.0]
2024-05-06 06:58:34.520773: Epoch time: 175.77 s
2024-05-06 06:58:35.810734:
2024-05-06 06:58:35.810878: Epoch 32
2024-05-06 06:58:35.810974: Current learning rate: 0.00971
2024-05-06 07:01:28.559406: train_loss nan
2024-05-06 07:01:28.559933: val_loss nan
2024-05-06 07:01:28.560041: Pseudo dice [0.0, 0.0, 0.0, nan, 0.0, 0.0]
2024-05-06 07:01:28.560115: Epoch time: 172.75 s
2024-05-06 07:01:29.910507:
2024-05-06 07:01:29.910718: Epoch 33
2024-05-06 07:01:29.910861: Current learning rate: 0.0097
2024-05-06 07:04:21.132929: train_loss nan
2024-05-06 07:04:21.133433: val_loss nan
2024-05-06 07:04:21.133681: Pseudo dice [0.0, 0.0, 0.0, nan, 0.0, 0.0]
2024-05-06 07:04:21.133753: Epoch time: 171.22 s
2024-05-06 07:04:22.612751:
2024-05-06 07:04:22.612866: Epoch 34
2024-05-06 07:04:22.612957: Current learning rate: 0.00969
2024-05-06 07:07:13.875252: train_loss nan
2024-05-06 07:07:13.875460: val_loss nan
2024-05-06 07:07:13.875552: Pseudo dice [0.0, 0.0, 0.0, nan, 0.0, 0.0]
2024-05-06 07:07:13.875623: Epoch time: 171.26 s

progress

raise RuntimeError(f"Could not find a dataset with the ID {dataset_id}. Make sure the requested dataset ID "

I am a rookie in the field of medical image segmentation. I encountered this error when using nnUNetv2. Here are the environment variables I have set:

[screenshot]

And I can read the corresponding folder from the command line:

[screenshot]

And I wrote a Python file that also shows I can read the dataset.json file by setting the environment variables:

[screenshot]

How should I solve this problem? I really hope to receive your help! Thanks!

How to modify the model config?

Hello,

When I was reviewing the code of 'ConfigurationManager', I couldn't find the details of the model parameter config.
If you can help me, I would appreciate it very much!

Thank you.

problem in inference

There are 30 cases in the source folder
I am process 0 out of 1 (max process ID is 0, we start counting with 0!)
There are 30 cases that I would like to predict
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
Traceback (most recent call last):
File "umamba/bin/nnUNetv2_predict", line 33, in
sys.exit(load_entry_point('nnunetv2', 'console_scripts', 'nnUNetv2_predict')())
File "U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 833, in predict_entry_point
predictor.predict_from_files(args.i, args.o, save_probabilities=args.save_probabilities,
File "U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 250, in predict_from_files
return self.predict_from_data_iterator(data_iterator, save_probabilities, num_processes_segmentation_export)
File "U-Mamba/umamba/nnunetv2/inference/predict_from_raw_data.py", line 343, in predict_from_data_iterator
for preprocessed in data_iterator:
File "U-Mamba/umamba/nnunetv2/inference/data_iterators.py", line 109, in preprocessing_iterator_fromfiles
raise RuntimeError('Background workers died. Look for the error message further up! If there is '
RuntimeError: Background workers died. Look for the error message further up! If there is none then your RAM was full and the worker was killed by the OS. Use fewer workers or get more RAM in that case!

Is this due to RAM, or to this error: "mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library. Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it"?

Standard nnUNet environment variables overwritten

In my setup the (nnUNetv2) paths.py contains the lines:

base = join(os.sep.join(__file__.split(os.sep)[:-3]), 'data') # or you can set your own path, e.g., base = '/home/user_name/Documents/U-Mamba/data'
nnUNet_raw = join(base, 'nnUNet_raw') # os.environ.get('nnUNet_raw')
nnUNet_preprocessed = join(base, 'nnUNet_preprocessed') # os.environ.get('nnUNet_preprocessed')
nnUNet_results = join(base, 'nnUNet_results') # os.environ.get('nnUNet_results')

where the nnUNet environment variables are ignored and redefined to a subdirectory of the base (code) directory. It took me a while to figure out why U-Mamba did not see my data...

TypeError in causal_conv1d_fwd() During nnUNetv2 3D Training Execution

When training the U-Mamba Enc model on the Brats2021 dataset, an issue arises when trying to perform a forward pass through the Mamba layer in the network architecture. It appears that the arguments passed to causal_conv1d_fwd() do not match the expected types or structure. Here is the error:

This is the configuration used by this training:
Configuration name: 3d_fullres
 {'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 2, 'patch_size': [128, 128, 128], 'median_image_size_in_voxels': [140.0, 171.0, 137.0], 'spacing': [1.0, 1.0, 1.0], 'normalization_schemes': ['ZScoreNormalization', 'ZScoreNormalization', 'ZScoreNormalization', 'ZScoreNormalization'], 'use_mask_for_norm': [True, True, True, True], 'UNet_class_name': 'PlainConvUNet', 'UNet_base_num_features': 32, 'n_conv_per_stage_encoder': [2, 2, 2, 1, 1, 1], 'n_conv_per_stage_decoder': [2, 2, 2, 1, 1], 'num_pool_per_axis': [5, 5, 5], 'pool_op_kernel_sizes': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'unet_max_num_features': 320, 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'batch_dice': False} 

These are the global plan.json settings:
 {'dataset_name': 'Dataset137_BraTS2021', 'plans_name': 'nnUNetPlans', 'original_median_spacing_after_transp': [1.0, 1.0, 1.0], 'original_median_shape_after_transp': [140, 171, 137], 'image_reader_writer': 'SimpleITKIO', 'transpose_forward': [0, 1, 2], 'transpose_backward': [0, 1, 2], 'experiment_planner_used': 'ExperimentPlanner', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 95242.25, 'mean': 871.816650390625, 'median': 407.0, 'min': 0.10992202162742615, 'percentile_00_5': 55.0, 'percentile_99_5': 5825.0, 'std': 2023.5313720703125}, '1': {'max': 1905559.25, 'mean': 1698.2144775390625, 'median': 552.0, 'min': 0.0, 'percentile_00_5': 47.0, 'percentile_99_5': 8322.0, 'std': 18787.4140625}, '2': {'max': 4438107.0, 'mean': 2141.349365234375, 'median': 738.0, 'min': 0.0, 'percentile_00_5': 110.0, 'percentile_99_5': 10396.0, 'std': 45159.37890625}, '3': {'max': 580014.3125, 'mean': 995.436279296875, 'median': 512.3143920898438, 'min': 0.0, 'percentile_00_5': 108.0, 'percentile_99_5': 11925.0, 'std': 4629.87939453125}}} 

2024-03-29 22:38:59.103773: unpacking dataset...
2024-03-29 22:38:59.715880: unpacking done...
2024-03-29 22:38:59.716621: do_dummy_2d_data_aug: False
2024-03-29 22:38:59.732308: Unable to plot network architecture:
2024-03-29 22:38:59.732390: No module named 'hiddenlayer'
2024-03-29 22:38:59.737705: 
2024-03-29 22:38:59.737803: Epoch 0
2024-03-29 22:38:59.737955: Current learning rate: 0.01
using pin_memory on device 0
Traceback (most recent call last):
  File "/usr/local/bin/nnUNetv2_train", line 33, in <module>
    sys.exit(load_entry_point('nnunetv2', 'console_scripts', 'nnUNetv2_train')())
  File "/content/U-Mamba/umamba/nnunetv2/run/run_training.py", line 268, in run_training_entry
    run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
  File "/content/U-Mamba/umamba/nnunetv2/run/run_training.py", line 204, in run_training
    nnunet_trainer.run_training()
  File "/content/U-Mamba/umamba/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 1258, in run_training
    train_outputs.append(self.train_step(next(self.dataloader_train)))
  File "/content/U-Mamba/umamba/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 900, in train_step
    output = self.network(data)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/U-Mamba/umamba/nnunetv2/nets/UMambaEnc_3d.py", line 478, in forward
    skips = self.encoder(x)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/U-Mamba/umamba/nnunetv2/nets/UMambaEnc_3d.py", line 287, in forward
    x = self.mamba_layers[s](x)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "/content/U-Mamba/umamba/nnunetv2/nets/UMambaEnc_3d.py", line 89, in forward
    out = self.forward_patch_token(x)
  File "/content/U-Mamba/umamba/nnunetv2/nets/UMambaEnc_3d.py", line 63, in forward_patch_token
    x_mamba = self.mamba(x_norm)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/mamba_ssm/modules/mamba_simple.py", line 146, in forward
    out = mamba_inner_fn(
  File "/usr/local/lib/python3.10/dist-packages/mamba_ssm/ops/selective_scan_interface.py", line 317, in mamba_inner_fn
    return MambaInnerFn.apply(xz, conv1d_weight, conv1d_bias, x_proj_weight, delta_proj_weight,
  File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/usr/local/lib/python3.10/dist-packages/torch/cuda/amp/autocast_mode.py", line 98, in decorate_fwd
    return fwd(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/mamba_ssm/ops/selective_scan_interface.py", line 187, in forward
    conv1d_out = causal_conv1d_cuda.causal_conv1d_fwd(
TypeError: causal_conv1d_fwd(): incompatible function arguments. The following argument types are supported:
    1. (arg0: torch.Tensor, arg1: torch.Tensor, arg2: Optional[torch.Tensor], arg3: Optional[torch.Tensor], arg4: bool) -> torch.Tensor
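
This kind of signature mismatch usually suggests that the installed causal-conv1d version does not match what the installed mamba-ssm expects (the low-level causal_conv1d_fwd interface has changed across releases), so aligning the two packages' versions, e.g. following the pinned versions in the installation instructions above, is a plausible fix; this is an educated guess from the error message rather than a confirmed resolution.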

GPU memory

Hello! Your work is great!
I want to know whether a 24 GB NVIDIA GeForce RTX 3090 GPU is enough to run all the experiments? I encountered an OOM problem.

Selective Scan Import Issue

Hello,

I made a new environment and installed Pytorch 2.1 with cuda 11.8, alongside the recommended causal-conv1d and mamba-ssm.
However, the model does not train because of 'selective_scan'. Could you help me with this?

This is the full error:
Traceback (most recent call last):
File "/rds/general/user/kp4718/home/code/MedMamba/trainpynew.py", line 129, in
main()
File "/rds/general/user/kp4718/home/code/MedMamba/trainpynew.py", line 88, in main
outputs = net(images)
^^^^^^^^^^^
File "/rds/general/user/kp4718/home/anaconda3/envs/cleanmamba/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/anaconda3/envs/cleanmamba/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/code/MedMamba/MedMamba.py", line 734, in forward
x = self.forward_backbone(x)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/code/MedMamba/MedMamba.py", line 730, in forward_backbone
x = layer(x)
^^^^^^^^
File "/rds/general/user/kp4718/home/anaconda3/envs/cleanmamba/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/anaconda3/envs/cleanmamba/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/code/MedMamba/MedMamba.py", line 570, in forward
x = blk(x)
^^^^^^
File "/rds/general/user/kp4718/home/anaconda3/envs/cleanmamba/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/anaconda3/envs/cleanmamba/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/code/MedMamba/MedMamba.py", line 503, in forward
x = input_right + self.drop_path(self.self_attention(self.ln_1(input_right)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/anaconda3/envs/cleanmamba/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/anaconda3/envs/cleanmamba/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/code/MedMamba/MedMamba.py", line 464, in forward
y1, y2, y3, y4 = self.forward_core(x)
^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/kp4718/home/code/MedMamba/MedMamba.py", line 379, in forward_corev0
self.selective_scan = selective_scan_fn
^^^^^^^^^^^^^^^^^
NameError: name 'selective_scan_fn' is not defined

Batch_size ?

Because my GPU memory is not enough, I want to modify the batch_size. What is the batch_size set to, and where in the code files can I modify it?
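
For nnU-Net v2-based code like this repo, the batch size normally comes from the plans file rather than from the code: the configuration dump quoted in an earlier issue shows 'batch_size': 2 for 3d_fullres. A minimal sketch of lowering it after preprocessing (the path is a placeholder, and the key layout assumes the standard nnU-Net v2 plans format):

import json

# Placeholder path; adjust to your dataset and nnUNet_preprocessed location.
plans_path = 'data/nnUNet_preprocessed/Dataset701_AbdomenCT/nnUNetPlans.json'

with open(plans_path) as f:
    plans = json.load(f)

print(plans['configurations']['3d_fullres']['batch_size'])  # current value
plans['configurations']['3d_fullres']['batch_size'] = 1     # reduce to fit GPU memory

with open(plans_path, 'w') as f:
    json.dump(plans, f, indent=4)

Since the plans file is read when training starts, a batch-size change like this should not require re-running preprocessing.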

TypeError in selective_scan_interface of mamba-ssm

Hi Jun, awesome work! I faced an issue while running the 3d_fullres training

conv1d_out = causal_conv1d_cuda.causal_conv1d_fwd(                                                                                                                                                                                                         
TypeError: causal_conv1d_fwd(): incompatible function arguments. The following argument types are supported:                                                                                                                                                   
    1. (arg0: torch.Tensor, arg1: torch.Tensor, arg2: Optional[torch.Tensor], arg3: Optional[torch.Tensor], arg4: bool) -> torch.Tensor  

Stand-alone

Great performance on the ULS challenge.

I was wondering if there is a non-nnUNet implementation of the network architecture for 2D, and 3D-UMamba (Stand-alone).
