PyTorch version of MeshSegNet for tooth segmentation of intraoral scans (point cloud/mesh). The code also includes visdom for training visualization.


MeshSegNet: Deep Multi-Scale Mesh Feature Learning for Automated Labeling of Raw Dental Surface from 3D Intraoral Scanners

Created by Chunfeng Lian, Li Wang, Tai-Hsien Wu, Fan Wang, Pew-Thian Yap, Ching-Chang Ko, and Dinggang Shen

Introduction

This work is the PyTorch implementation of MeshSegNet, which has been published in IEEE Transactions on Medical Imaging (https://ieeexplore.ieee.org/abstract/document/8984309) and MICCAI 2019 (https://link.springer.com/chapter/10.1007/978-3-030-32226-7_93). MeshSegNet precisely labels teeth on digitized 3D dental surface models acquired by intraoral scanners (IOSs).

This repository contains three main Python scripts (steps 1 to 3) and three optional Python scripts (steps 3-1, 4, and 5). Unfortunately, we are unable to provide the data. Please see below for a detailed explanation of the code.

Step 1 Data Augmentation

To enlarge the training dataset, we first augment the available intraoral scans (i.e., meshes) by 1) random rotation, 2) random translation, and 3) random rescaling of each mesh within reasonable ranges.

In this work, our intraoral scans are stored in VTP (VTK polygonal data) format. I have designed a simplified custom package called “easy_mesh_vtk” for reading and manipulating VTP files; please refer to https://github.com/Tai-Hsien/easy_mesh_vtk. We have 36 intraoral scans, all of which were previously downsampled using “easy_mesh_vtk”. We use 24 scans as the training set, 6 scans as the validation set, and keep 6 scans as the test set. For the training and validation sets, each scan (e.g., Sample_01_d.vtp) and its flipped counterpart (e.g., Sample_01001_d.vtp) are augmented 20 times. All generated augmented intraoral scans (i.e., the training and validation sets) are saved in the “./augmentation_vtk_data” folder.

In step1_augmentation.py, the variable “vtk_path” needs to be defined; it is the folder path of the intraoral scans. Then you can run this step with the following command.

python step1_augmentation.py
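The three augmentation transforms can be composed into a single homogeneous matrix applied to the mesh vertices. The sketch below is illustrative only: the function names and parameter ranges are assumptions, not the exact ones used in step1_augmentation.py.

```python
import numpy as np

def random_transform_matrix(rot_deg=30.0, trans=2.0, scale=(0.8, 1.2)):
    """Compose a random rotation (about z, for brevity), a random
    translation, and a random isotropic rescaling into one 4x4
    homogeneous transform. Ranges are illustrative."""
    theta = np.radians(np.random.uniform(-rot_deg, rot_deg))
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s, 0, 0],
                         [s,  c, 0, 0],
                         [0,  0, 1, 0],
                         [0,  0, 0, 1]])
    translation = np.eye(4)
    translation[:3, 3] = np.random.uniform(-trans, trans, size=3)
    scaling = np.diag([*([np.random.uniform(*scale)] * 3), 1.0])
    return translation @ rotation @ scaling

def augment(points, matrix):
    """Apply the transform to an (N, 3) array of mesh vertices."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ matrix.T)[:, :3]
```

Each augmented copy would then be written back out as a new VTP file via “easy_mesh_vtk”.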

Step 2 Generate training and validation lists

In step2_get_list.py, please define the variables “num_augmentation” and “num_samples” according to step1_augmentation.py. Since we use 24 of the 30 training/validation scans as training data, “train_size” is set to 0.8. You can run this step with the following command.

python step2_get_list.py

Then, two CSV files (i.e., train_list.csv and val_list.csv) are generated in the same folder.
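The list generation amounts to shuffling the sample IDs, splitting them by “train_size”, and expanding each ID into its augmented filenames. A minimal sketch follows; the naming scheme and function name are assumptions, not taken from step2_get_list.py.

```python
import csv
import random

def write_lists(num_samples=30, num_augmentations=20, train_size=0.8, seed=42):
    """Split sample IDs into training/validation sets and expand each ID
    into its augmented filenames (naming scheme is illustrative)."""
    ids = list(range(1, num_samples + 1))
    random.Random(seed).shuffle(ids)
    cut = int(train_size * num_samples)
    splits = {"train_list.csv": ids[:cut], "val_list.csv": ids[cut:]}
    for fname, id_list in splits.items():
        with open(fname, "w", newline="") as f:
            writer = csv.writer(f)
            for i in id_list:
                for a in range(num_augmentations):
                    writer.writerow(
                        [f"./augmentation_vtk_data/Sample_{i:02d}_A{a:02d}_d.vtp"])
```

With the defaults this yields 480 training rows (24 scans × 20 augmentations) and 120 validation rows.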

Step 3 Model training

In step3_training.py, please define the variable “model_name”, which is used for the visdom environment and the output filename. If your system doesn’t have visdom, please set the variable “use_visdom” to False. In this work, the number of classes is 15: second molar to second molar (14 teeth) plus the gingiva. The number of features is also 15, corresponding to the cell vertices (9 elements), the cell normal vector (3 elements), and the relative position (3 elements). To further augment our dataset, during training we select all tooth cells (i.e., triangles) and randomly select gingival cells to form 6,000-cell inputs based on the original scans in “./augmentation_vtk_data”. Preparing the input features and the further-augmented data, as well as computing the adjacency matrices (AS and AL; refer to the original paper for details), is carried out by Mesh_dataset.py. The network architecture of MeshSegNet is defined in meshsegnet.py.
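The 15-dimensional per-cell feature described above can be sketched as follows. This is an assumption-laden illustration of the feature layout (9 vertex coordinates, 3 normal components, 3 relative-position values), not the exact normalization performed in Mesh_dataset.py.

```python
import numpy as np

def cell_features(cells, mesh_center):
    """Build the 15-dim input per triangular cell: 9 vertex coordinates,
    the unit cell normal (3 values), and the relative position of the
    cell barycenter with respect to the mesh center (3 values).
    `cells` is an (N, 3, 3) array: N triangles, 3 vertices, xyz."""
    v0, v1, v2 = cells[:, 0], cells[:, 1], cells[:, 2]
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    barycenters = cells.mean(axis=1)
    relative = barycenters - mesh_center
    return np.hstack([cells.reshape(len(cells), 9), normals, relative])
```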

You can start to train a MeshSegNet model by the following command.

python step3_training.py

We provide a trained model and its training curves in the “./models” folder.

Optional:

If you would like to continue training a previously saved model, you can modify step3_1_continous_training.py accordingly and execute it with

python step3_1_continous_training.py

Step 4 Model testing

Once you have a well-trained model, you can use step4_test.py to evaluate it on your test dataset. Please define the path of the test dataset (variable “mesh_path”) and the filenames according to your data. Run this step by entering

python step4_test.py

The deployed results will be saved in “./test”, and the metrics (DSC, SEN, PPV) will be displayed.
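For reference, the three reported metrics reduce to simple counts over predicted vs. ground-truth cell labels. A minimal sketch (the function name is hypothetical, not from step4_test.py):

```python
import numpy as np

def per_class_metrics(y_true, y_pred, label):
    """DSC (Dice similarity coefficient), SEN (sensitivity/recall), and
    PPV (positive predictive value/precision) for one tooth label."""
    tp = np.sum((y_pred == label) & (y_true == label))
    fp = np.sum((y_pred == label) & (y_true != label))
    fn = np.sum((y_pred != label) & (y_true == label))
    dsc = 2 * tp / (2 * tp + fp + fn)
    sen = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return dsc, sen, ppv
```

In practice these would be averaged over the tooth classes and over the test scans.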

Step 5 Predict unseen intraoral scans

step5_predict.py is very similar to step4_test.py. Once you set the data path and filenames accordingly, it predicts the tooth labels on unseen intraoral scans. The deployed results are saved in “./test” as well. No metrics are computed, because unseen scans have no ground truth.

Run this step by entering

python step5_predict.py

Post-Processing

Our publication in IEEE Transactions on Medical Imaging (https://ieeexplore.ieee.org/abstract/document/8984309) describes a multi-label graph-cut method for refining the predicted results. We do not provide the related code in this repository; we implemented this step using the MATLAB package gco-v3.0.zip from https://vision.cs.uwaterloo.ca/code/. If you need help with this part, please feel free to email me ([email protected]) or contact me via GitHub.

Citation

If you find our work useful in your research, please cite:

@article{Lian2020,
  author = {Lian, C and Wang, L and Wu, T and Wang, F and Yap, P and Ko, C and Shen, D},
  doi = {10.1109/TMI.2020.2971730},
  issn = {1558-254X},
  journal = {IEEE Transactions on Medical Imaging},
  keywords = {3D Intraoral Scanners, 3D Shape Segmentation, Automated Tooth Labeling, Geometric Deep Learning, Orthodontic Treatment Planning},
  pages = {1},
  title = {{Deep Multi-Scale Mesh Feature Learning for Automated Labeling of Raw Dental Surfaces from 3D Intraoral Scanners}},
  year = {2020}
}

@inproceedings{Lian2019,
  address = {Cham},
  author = {Lian, Chunfeng and Wang, Li and Wu, Tai-Hsien and Liu, Mingxia and Dur{\'{a}}n, Francisca and Ko, Ching-Chang and Shen, Dinggang},
  booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2019},
  editor = {Shen, Dinggang and Liu, Tianming and Peters, Terry M and Staib, Lawrence H and Essert, Caroline and Zhou, Sean and Yap, Pew-Thian and Khan, Ali},
  isbn = {978-3-030-32226-7},
  pages = {837--845},
  publisher = {Springer International Publishing},
  title = {{MeshSNet: Deep Multi-scale Mesh Feature Learning for End-to-End Tooth Labeling on 3D Dental Surfaces}},
  year = {2019}
}
