aimagelab-zip / alveolar_canal

This repository contains the material from the paper "Improving Segmentation of the Inferior Alveolar Nerve through Deep Label Propagation"



Improving Segmentation of the Inferior Alveolar Nerve through Deep Label Propagation

Figure: front and side views of a densely annotated IAN

Introduction

This repository contains the material from the paper "Improving Segmentation of the Inferior Alveolar Nerve through Deep Label Propagation". In particular, it is dedicated to the 3D neural networks used to generate labels for and segment the Inferior Alveolar Nerve (IAN). This nerve often lies close to the roots of the molars, so its position must be carefully mapped before surgical removal. Since avoiding contact with the IAN is a primary concern during these operations, segmentation plays a key role in surgical preparation.

Citing our work

BibTeX

IAN Segmentation and Label Propagation

For IAN segmentation, we adopted a modified 3D U-Net, enriched with a 2-pixel padding and an embedding of the coordinates of the sub-volumes fed to the network. Because manual annotation of segmentation ground truth is a heavy burden, we also employed the same neural network to expand our dataset, producing dense annotations for all of its volumes in order to have more data for training on the main task. Training of the segmentation network is divided into two phases: first we pretrain on the sparse annotations with their generated ground truth, then we fine-tune on the real dense annotations.
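The coordinate embedding mentioned above can be illustrated as follows. This is a hedged sketch, not the repository's PosPadUNet3D code: the function name and the exact encoding (normalized lower and upper corners of each patch) are assumptions chosen for illustration.

```python
import numpy as np

def normalize_patch_coords(origin, patch_shape, volume_shape):
    """Map a patch's corner coordinates into [0, 1] relative to the full
    volume. The idea is that the same anatomy looks different depending on
    where a sub-volume sits, so its (normalized) position is given to the
    network as an extra input alongside the voxel data."""
    origin = np.asarray(origin, dtype=np.float32)
    patch_shape = np.asarray(patch_shape, dtype=np.float32)
    volume_shape = np.asarray(volume_shape, dtype=np.float32)
    low = origin / volume_shape                    # normalized lower corner
    high = (origin + patch_shape) / volume_shape   # normalized upper corner
    return np.concatenate([low, high])             # 6-dim coordinate vector

# Example: a 120^3 patch at voxel (0, 80, 120) of a 168x280x360 volume
coords = normalize_patch_coords((0, 80, 120), (120, 120, 120), (168, 280, 360))
```

In a network, a vector like this would typically be projected and concatenated to the bottleneck features; the details of that fusion are specific to the actual model.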

Dataset

Before running this project, you need to download the dataset. Also see this repository, which contains the code to generate the naive dense labels from the sparse annotations (Circular Expansion).
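The idea behind the Circular Expansion baseline can be sketched in a few lines. This is a hypothetical re-implementation for illustration only, assuming the expansion stamps a fixed-radius ball around every sparsely annotated voxel; the linked code is the authoritative version.

```python
import numpy as np

def circular_expansion(centerline_voxels, volume_shape, radius=2):
    """Naively densify a sparse annotation by marking a ball of the given
    radius around every annotated voxel (illustrative sketch of the
    'Circular Expansion' baseline, not the repository's actual code)."""
    dense = np.zeros(volume_shape, dtype=np.uint8)
    # Precompute the integer offsets inside the ball once.
    r = radius
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1].reshape(3, -1).T
    ball = grid[(grid ** 2).sum(axis=1) <= r * r]
    for z, y, x in centerline_voxels:
        pts = ball + (z, y, x)
        # Keep only offsets that fall inside the volume.
        ok = ((pts >= 0) & (pts < np.array(volume_shape))).all(axis=1)
        dense[tuple(pts[ok].T)] = 1
    return dense

# Two adjacent centerline voxels expanded inside a 10^3 toy volume
labels = circular_expansion([(5, 5, 5), (5, 5, 6)], (10, 10, 10), radius=2)
```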

How to run

Clone this repository, create a Python env for the project (optional), and activate it. Then install all the dependencies with pip:

git clone git@github.com:AImageLab-zip/alveolar_canal.git
cd alveolar_canal
python -m venv env
source env/bin/activate
pip install -r requirements.txt

Run

Run the project as follows:

python main.py [-h] -c CONFIG [--verbose]

arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        the config file used to run the experiment
  --verbose             To log also to stdout
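The CLI above can be mirrored with argparse. This is a hedged reconstruction of the interface shown in the help text; the repository's actual main.py may differ in details.

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of the CLI shown above.
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config', required=True,
                        help='the config file used to run the experiment')
    parser.add_argument('--verbose', action='store_true',
                        help='also log to stdout')
    return parser

# Parse an example command line (as if invoked from the shell)
args = build_parser().parse_args(['--config', 'configs/gen-training.yaml', '--verbose'])
```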

E.g. to run the generation experiment, execute:

python main.py --config configs/gen-training.yaml

YAML config files

You can find the config files used to obtain the best results in the configs folder. Two files are needed per run: experiment.yaml and augmentations.yaml. For each task, the best config file is provided:

  • gen-training.yaml for the network which generates the dense labels from the sparse annotations
  • seg-pretrain.yaml, which trains the segmentation network only on the generated labels
  • seg-finetuning.yaml, which trains the segmentation network on the real dense labels

Execute main.py with these three configs, in this order, to reproduce our results.

Checkpoints

Download the pre-trained checkpoints here

experiment.yaml

experiment.yaml describes each part of the project, such as the network/loss/optimizer, how to load the data, and so on:

# title of the experiment
title: canal_generator_train
# Where to output everything, in this path a folder with
# the same name as the title is created containing checkpoints,
# logs and a copy of the config used
project_dir: '/path/to/results'
seed: 47

# which experiment to execute: Segmentation or Generation
experiment:
  name: Generation

data_loader:
  dataset: /path/to/maxillo
  # null to use the training set, generated to use the generated dataset
  training_set: null
  # which preprocessing to use, see: preprocessing.yaml
  preprocessing: configs/preprocessing.yaml
  # which augmentations to use, see: augmentations.yaml
  augmentations: configs/augmentations.yaml
  background_suppression: 0
  batch_size: 2
  labels:
    BACKGROUND: 0
    INSIDE: 1
  mean: 0.08435
  num_workers: 8
  # shape of a single patch
  patch_shape:
  - 120
  - 120
  - 120
  # reshape of the whole volume before extracting the patches
  resize_shape:
  - 168
  - 280
  - 360
  sampler_type: grid
  grid_overlap: 0
  std: 0.17885
  volumes_max: 2100
  volumes_min: 0
  weights:
  - 0.000703
  - 0.999

# which network to use
model:
  name: PosPadUNet3D

loss:
  name: Jaccard

lr_scheduler:
  name: Plateau

optimizer:
  learning_rate: 0.1
  name: SGD

trainer:
  # Reload the last checkpoints?
  reload: True
  checkpoint: /path/to/checkpoints/last.pth
  # train the network
  do_train: True
  # do a single test of the network with the loaded checkpoints
  do_test: False
  # generate the synthetic dense dataset
  do_inference: False
  epochs: 100
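Given the patch_shape of 120³, the resize_shape of 168×280×360, and the zero-overlap grid sampler above, the number of patches per volume can be estimated. This sketch assumes the sampler pads each volume so that patches tile it completely (as torchio's GridSampler does); it is an approximation, not the repository's code.

```python
import math

def grid_patch_count(volume_shape, patch_shape, overlap=0):
    """Rough per-axis count of grid patches, assuming the volume is padded
    so patches with the given overlap cover it completely."""
    counts = []
    for dim, patch in zip(volume_shape, patch_shape):
        step = patch - overlap
        counts.append(math.ceil(max(dim - overlap, 1) / step))
    return counts

per_axis = grid_patch_count((168, 280, 360), (120, 120, 120), overlap=0)
total = per_axis[0] * per_axis[1] * per_axis[2]   # 2 * 3 * 3 = 18 patches
```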

preprocessing.yaml

preprocessing.yaml defines which preprocessing to use during training. A single simple preprocessing file has been used for every experiment. The file should follow this structure:

Clamp:
  out_min: 0
  out_max: 2100
RescaleIntensity:
  out_min_max: !!python/tuple [0, 1]
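The two steps above (a clamp to [0, 2100] followed by a rescale to [0, 1]) amount to the following. This is a minimal numpy sketch of the transform pipeline, assuming the torchio-style semantics of Clamp and RescaleIntensity; it is not the repository's code.

```python
import numpy as np

def clamp_and_rescale(volume, vmin=0, vmax=2100):
    """Clamp intensities to [vmin, vmax], then rescale linearly to [0, 1],
    mirroring the preprocessing configured above."""
    clamped = np.clip(volume, vmin, vmax).astype(np.float32)
    return (clamped - vmin) / (vmax - vmin)

out = clamp_and_rescale(np.array([-100.0, 0.0, 1050.0, 2100.0, 4000.0]))
```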

augmentations.yaml

augmentations.yaml defines which augmentations to use during training. Two different augmentation files have been used: one for the segmentation task and one for the generation task. The file should follow this structure:

RandomAffine:
  scales: !!python/tuple [0.5, 1.5]
  degrees: !!python/tuple [10, 10]
  isotropic: false
  image_interpolation: linear
  p: 0.5
RandomFlip:
  axes: 2
  flip_probability: 0.7
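As a concrete example of the RandomFlip entry above (axis 2, flip probability 0.7), here is a minimal numpy stand-in. The actual experiments use torchio transforms; this sketch only illustrates the semantics.

```python
import numpy as np

def random_flip(volume, axis=2, p=0.7, rng=None):
    """Flip the volume along the given axis with probability p,
    a minimal stand-in for the RandomFlip transform configured above."""
    rng = rng or np.random.default_rng()
    if rng.random() < p:
        return np.flip(volume, axis=axis)
    return volume

vol = np.arange(8).reshape(2, 2, 2)
flipped = random_flip(vol, axis=2, p=1.0)   # p=1 makes the flip deterministic
```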

alveolar_canal's People

Contributors: emnlp23, lucalumetti, potpov, vittoriopipoli

alveolar_canal's Issues

Need canal inference

There is no canal inference in this repo, so I can't test the pre-trained checkpoint.
How can I test the pre-trained model on whole-volume data?

Inference with another database

Hi,

I was trying to test your network and your weights on another database, but it fails each time.
Note that the size of each volume (364, 704, 704) is bigger than the ones you provided. Do you have any idea why it fails each time?

You will find attached one example from the database that I am trying to work with. Thank you.

test

Best Regards,
Hamid FSIAN

The seg-pretraining issues

Hello author! I have hit an error while reproducing the code and would like your answer. When I run the command python main.py --config configs/seg-pretraining.yaml, I get a divide-by-zero error. The error comes from line 204 of experiments, which runs a continue operation over the whole dataset; what causes this? torch.sum(gt_count) is always equal to 0. Also, does synthetic_loader only use the dense labels generated by gen-inference.yaml? Looking forward to your answer to the above error while trying to reproduce your work.
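One way a divide-by-zero like the one described in this issue can arise is when a patch's ground truth contains no foreground voxels. The sketch below (numpy, hypothetical helper name, not the repository's actual code) shows the kind of guard that avoids it by skipping empty patches:

```python
import numpy as np

def partition_weight(gt_patch, eps=1e-8):
    """Illustrative guard: weight a patch by its foreground fraction, and
    return None when the ground truth is empty so the caller can `continue`
    instead of dividing by zero (hypothetical sketch)."""
    fg = float(gt_patch.sum())
    if fg == 0:
        return None  # caller should skip this patch
    return fg / (gt_patch.size + eps)

empty = partition_weight(np.zeros((4, 4, 4)))
nonempty = partition_weight(np.ones((4, 4, 4)))
```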

hello,a simple question

I wonder what ground truth is used to train the Deep Label Expansion stage? From my observation, it seems that labels generated from torchio.LabelMap are used. Thanks very much!

NaN error in validation phase

I am trying to run this code using the maxillo dataset, but it raises the error below.

Traceback (most recent call last):
  File "/home/syu/Documents/maison/alveolar_canal/main.py", line 170, in <module>
    val_iou, val_dice = experiment.test(phase="Validation")
  File "/home/syu/Documents/maison/alveolar_canal/experiments/experiment.py", line 276, in test
    loss = self.loss(output.unsqueeze(0), gt.unsqueeze(0), partition_weights)
  File "/home/syu/Documents/maison/alveolar_canal/losses/LossFactory.py", line 60, in __call__
    raise ValueError(f'Loss {loss_name} has some NaN')
ValueError: Loss DiceLoss has some NaN

I'm wondering whether this code and dataset work as written in the README, and whether there are any corrupted files in this dataset.
Big congratulations on your achievements at CVPR!
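A NaN from a Dice loss typically comes from a 0/0 when both prediction and target are empty; the standard remedy is a smoothing term. The sketch below shows that fix in numpy; whether the repository's DiceLoss uses exactly this formulation is an assumption.

```python
import numpy as np

def dice_loss(pred, target, smooth=1e-5):
    """Dice loss with a smoothing term so that all-background patches do not
    produce 0/0 = NaN (a common fix for errors like the one above;
    illustrative, not the repository's actual DiceLoss)."""
    inter = (pred * target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 - (2.0 * inter + smooth) / (denom + smooth)

loss_empty = dice_loss(np.zeros((4, 4)), np.zeros((4, 4)))  # no NaN
loss_perfect = dice_loss(np.ones((4, 4)), np.ones((4, 4)))
```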

Error trying to run inference

Hello, I'm trying to run inference by executing main.py with a configuration file based on gen-inference-unet in the configs folder, pointing my dataset to a folder with the images I want to segment in .npy format. I get the following error:

INFO:root:loading preprocessing
Traceback (most recent call last):
  File "/home/renan/anaconda3/envs/alveolar_canal/lib/python3.9/site-packages/munch/__init__.py", line 103, in __getattr__
    return object.__getattribute__(self, k)
AttributeError: 'Munch' object has no attribute 'preprocessing'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/renan/anaconda3/envs/alveolar_canal/lib/python3.9/site-packages/munch/__init__.py", line 106, in __getattr__
    return self[k]
KeyError: 'preprocessing'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/renan/alveolar_canal/main.py", line 92, in <module>
    if config.data_loader.preprocessing is None:
  File "/home/renan/anaconda3/envs/alveolar_canal/lib/python3.9/site-packages/munch/__init__.py", line 108, in __getattr__
    raise AttributeError(k)
AttributeError: preprocessing

Looking at the code, there seems to be a section which should handle missing preprocessing fields, but it doesn't seem to be working. Here is my YAML:

# title of the experiment
title: canal_generator_train
# Where to output everything, in this path a folder with
# the same name as the title is created containing checkpoints,
# logs and a copy of the config used
project_dir: './results'
seed: 47

# which experiment to execute: Segmentation or Generation
experiment:
  name: Segmentation

data_loader:
  dataset: ./data/MG_scan_test.nii.gz
  # null to use training_set, generated to used the generated dataset
  training_set: null
  # which augmentations to use, see: augmentations.yaml
  augmentations: configs/augmentations.yaml
  background_suppression: 0
  batch_size: 2
  labels:
    BACKGROUND: 0
    INSIDE: 1
  mean: 0.08435
  num_workers: 8
  # shape of a single patch
  patch_shape:
  - 120
  - 120
  - 120
  # reshape of the whole volume before extracting the patches
  resize_shape:
  - 168
  - 280
  - 360
  sampler_type: grid
  grid_overlap: 0
  std: 0.17885
  volumes_max: 2100
  volumes_min: 0
  weights:
  - 0.000703
  - 0.999

# which network to use
model:
  name: PosPadUNet3D

loss:
  name: Jaccard

lr_scheduler:
  name: Plateau

optimizer:
  learning_rate: 0.1
  name: SGD

trainer:
  # Reload the last checkpoints?
  reload: False
  checkpoint: ./checkpoints/seg-checkpoint.pth
  # train the network
  do_train: False
  # do a single test of the network with the loaded checkpoints
  do_test: False
  # generate the synthetic dense dataset
  do_inference: True
  epochs: 100

Any help is appreciated, thanks.
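The AttributeError in this report comes from accessing a config key that the YAML above simply omits. A defensive lookup with a default avoids the crash; this is a sketch of the general pattern (using a stdlib SimpleNamespace in place of Munch), not the repository's actual handling.

```python
from types import SimpleNamespace

def get_option(section, key, default=None):
    """Defensive lookup for optional config fields such as `preprocessing`:
    return a default instead of raising AttributeError when the key is
    absent (illustrative sketch)."""
    return getattr(section, key, default)

# A config section that, like the YAML above, has no `preprocessing` key
data_loader = SimpleNamespace(augmentations='configs/augmentations.yaml')
prep = get_option(data_loader, 'preprocessing')   # None instead of a crash
```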
