Point2Cell: Efficient interactive biomedical image annotation. Instantiation to stain-free phase-contrast microscopy
- Install anaconda-python (for more details please check https://docs.anaconda.com/anaconda/install/index.html)
- Clone the repository (this will take some time)
git clone https://github.com/ounissimehdi/Point2Cell
- Change the working directory
cd Point2Cell
- Create the point2Cell conda environment with all the dependencies
conda env create -f environment.yml
Note that this project is tested on Windows 11, macOS Big Sur, and Linux (Ubuntu 20.04.3 LTS),
with the latest PyTorch release to date (PyTorch 1.10.2, CUDA 11.3, cuDNN 8.0).
It supports GPU acceleration.
- Activate the point2Cell conda environment
conda activate point2Cell
You are all set!
It is highly recommended to read the paper before using the Point2Cell annotation tool (please do not forget to cite the paper and the GitHub repository).
In this tutorial, a public dataset called "HeLa cells on a flat glass" will be used; it can be downloaded from the following link (http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip).
Some basic information about this dataset:
- Microscope: Zeiss LSM 510 Meta
- Objective lens: Plan-Apochromat 63x/1.4 (oil)
- Pixel size (microns): 0.19 x 0.19
- Time step (min): 10
Make sure you are at the same folder level as the Point2Cell/ repository
- Create the dataset folder and prepare the dataset
mkdir dataset
cd dataset
mkdir hela_cells_dataset
cd hela_cells_dataset
wget http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip
unzip DIC-C2DH-HeLa.zip
rm DIC-C2DH-HeLa.zip
cd ../../
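As a quick sanity check after downloading, a small Python helper (hypothetical, not part of the repository) can confirm that each sequence folder holds the expected 84 images (the Cell Tracking Challenge images are assumed to be .tif files):

```python
from pathlib import Path

def count_images(folder, extensions=(".tif", ".png", ".jpg")):
    """Count image files directly inside `folder` (non-recursive)."""
    folder = Path(folder)
    return sum(1 for p in folder.iterdir()
               if p.is_file() and p.suffix.lower() in extensions)

# Example (assuming the dataset layout above):
# count_images("dataset/hela_cells_dataset/DIC-C2DH-HeLa/01")  # expected: 84
```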
Note that the folder (dataset/hela_cells_dataset/DIC-C2DH-HeLa/01) with 84 images will be used for training/validation, and the folder (dataset/hela_cells_dataset/DIC-C2DH-HeLa/02) with another 84 images will be used for testing.
At the end of this step you should have:
tree
YOUR_PATH_TO_MAIN_FOLDER:.
├───dataset
│   └───hela_cells_dataset
│       └───DIC-C2DH-HeLa
│           ├───01
│           ├───01_GT
│           │   ├───SEG
│           │   └───TRA
│           ├───01_ST
│           │   └───SEG
│           ├───02
│           ├───02_GT
│           │   ├───SEG
│           │   └───TRA
│           └───02_ST
│               └───SEG
└───Point2Cell
    ├───annotation_interface
    ├───demo
    ├───experiments
    ├───labelme_utils
    ├───train_models
    │   └───hela_cells
    │       └───runs
    ├───unet
    └───utils
- Prepare HeLa cells dataset train/validation/test splits after data-augmentation
cd Point2Cell/
jupyter-notebook
a. Please open hela_data_preparation.ipynb inside the Point2Cell folder; your browser should open a link similar to this one (http://localhost:8888/notebooks/hela_data_preparation.ipynb).
b. Execute all cells in the notebook; it applies data augmentation to the training/validation data.
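The 8x increase in image count (84 → 672) is consistent with generating eight geometric variants per image, e.g. four 90° rotations with and without a horizontal flip. A minimal pure-Python sketch of that idea (the notebook's actual augmentations may differ):

```python
def rotate90(grid):
    """Rotate a 2D list of pixel values 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def hflip(grid):
    """Mirror a 2D list of pixel values left-right."""
    return [row[::-1] for row in grid]

def eight_variants(grid):
    """Return the 8 dihedral variants: 4 rotations x {identity, flip}."""
    variants = []
    for flipped in (grid, hflip(grid)):
        g = flipped
        for _ in range(4):
            variants.append(g)
            g = rotate90(g)
    return variants
```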
At the end of this step you should have:
tree
YOUR_PATH_TO_MAIN_FOLDER:.
├───dataset
│   ├───augmented_hela_cells_dataset
│   │   ├───test
│   │   ├───train
│   │   └───val
│   └───hela_cells_dataset
│       └───DIC-C2DH-HeLa
│           ├───01
│           ├───01_GT
│           │   ├───SEG
│           │   └───TRA
│           ├───01_ST
│           │   └───SEG
│           ├───02
│           ├───02_GT
│           │   ├───SEG
│           │   └───TRA
│           └───02_ST
│               └───SEG
└───Point2Cell
    ├───annotation_interface
    ├───demo
    ├───experiments
    ├───labelme_utils
    ├───train_models
    │   └───hela_cells
    │       └───runs
    ├───unet
    └───utils
The data is now ready!
- All train/validation dataset images: 84 (672 images with data augmentation)
- (80%) train images: 67 (536 images with data augmentation)
- (20%) validation images: 17 (136 images with data augmentation)
- The test dataset contains 84 images and their ground-truth masks.
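The 80/20 split above (67 train / 17 validation out of 84) can be reproduced with a simple shuffled split; a hypothetical sketch (the notebook may use a different seed or ordering):

```python
import random

def train_val_split(items, train_fraction=0.8, seed=42):
    """Shuffle items deterministically and split them into train/val lists."""
    items = list(items)
    rng = random.Random(seed)
    rng.shuffle(items)
    n_train = int(len(items) * train_fraction)  # int(84 * 0.8) == 67
    return items[:n_train], items[n_train:]
```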
Now we can start training the two U-Net models ('intra-cellular density' and 'cell mask predictor').
- Train/validate, then test the U-Net 'cell mask predictor' model
cd Point2Cell/train_models/hela_cells/
conda activate point2Cell
python bn_mask_train_unet.py
- Train/validate, then test the U-Net 'intra-cellular density' model
cd Point2Cell/train_models/hela_cells/
conda activate point2Cell
python density_mask_train_unet.py
- Check the training/validation and testing details inside (./Point2Cell/experiments/YOUR_EXP_NAME/logfile.log)
Note that this project supports TensorBoard; all records can be found inside (./Point2Cell/train_models/hela_cells/runs).
cd Point2Cell/train_models/hela_cells/
tensorboard --logdir runs
Happy visualizing!
- Compute density maps and cell masks
cd Point2Cell/annotation_interface/
jupyter-notebook
a. Please open novel_data_prep.ipynb inside the (./Point2Cell/annotation_interface/) folder; your browser should open a link similar to this one (http://localhost:8888/notebooks/novel_data_prep.ipynb).
b. Execute all cells in the notebook; it loads the best models already trained and uses them to compute the density maps and cell masks for the novel data (the test dataset).
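As stated later in this README, the best model is the one with the smallest validation loss. A minimal, hypothetical sketch of selecting such a checkpoint from a per-epoch log (the notebook's actual selection code may differ):

```python
def best_checkpoint(val_losses):
    """Given {checkpoint_name: validation_loss}, return the name with the
    smallest loss (iterating over sorted names keeps ties deterministic)."""
    return min(sorted(val_losses), key=lambda name: val_losses[name])
```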
At the end of this step you should have:
tree
YOUR_PATH_TO_MAIN_FOLDER:.
├───dataset
│   ├───augmented_hela_cells_dataset
│   │   ├───test
│   │   ├───train
│   │   └───val
│   ├───hela_cells_dataset
│   │   └───DIC-C2DH-HeLa
│   │       ├───01
│   │       ├───01_GT
│   │       │   ├───SEG
│   │       │   └───TRA
│   │       ├───01_ST
│   │       │   └───SEG
│   │       ├───02
│   │       ├───02_GT
│   │       │   ├───SEG
│   │       │   └───TRA
│   │       └───02_ST
│   │           └───SEG
│   └───novel_annotation
│       ├───binary_masks
│       ├───density_masks
│       └───images
└───Point2Cell
    ├───annotation_interface
    ├───demo
    ├───experiments
    ├───labelme_utils
    ├───train_models
    │   └───hela_cells
    │       └───runs
    ├───unet
    └───utils
- Start the annotation of the novel dataset
cd Point2Cell/annotation_interface/
python annotation_tool.py
Enjoy!
The best model is based on the smallest validation loss.
U-Net 'cell mask predictor' results:
data-split/metrics | Mean BCE Loss | DICE | PRECISION | RECALL | ACCURACY |
---|---|---|---|---|---|
Test | 0.151±0.0121 | 93.92%±0.5107 | 96.30%±0.4860 | 91.77%±1.3294 | 93.81%±0.4777 |
Validation | 0.1237±0.0026 | 94.88%±0.0606 | 93.82%±0.4604 | 96.02%±0.5218 | 94.85%±0.0798 |
Detailed results on the test dataset:
Cross-validation/metrics | Mean BCE Loss | DICE | PRECISION | RECALL | ACCURACY |
---|---|---|---|---|---|
FOLD 1 | 0.1644 | 93.40% | 97.14% | 90.00% | 93.37% |
FOLD 2 | 0.1414 | 94.36% | 96.05% | 92.90% | 94.23% |
FOLD 3 | 0.1348 | 94.58% | 96.09% | 93.18% | 94.43% |
FOLD 4 | 0.1493 | 94.01% | 95.73% | 92.42% | 93.86% |
FOLD 5 | 0.1651 | 93.29% | 96.51% | 90.35% | 93.19% |
Detailed results on the validation:
Cross-validation/metrics | Mean BCE Loss | DICE | PRECISION | RECALL | ACCURACY |
---|---|---|---|---|---|
FOLD 1 | 0.1260 | 94.86% | 94.29% | 95.48% | 94.79% |
FOLD 2 | 0.1219 | 94.93% | 93.49% | 96.50% | 94.86% |
FOLD 3 | 0.1252 | 94.93% | 93.26% | 96.72% | 94.83% |
FOLD 4 | 0.1263 | 94.77% | 93.64% | 95.98% | 94.80% |
FOLD 5 | 0.1195 | 94.91% | 94.44% | 95.43% | 95.01% |
U-Net 'intra-cellular density' results:
data-split/metrics | Mean MSE |
---|---|
Test | 0.03164±0.0028 |
Validation | 0.0252±0.0004 |
Detailed results on the test dataset:
Cross-validation/metrics | Mean MSE |
---|---|
FOLD 1 | 0.0338 |
FOLD 2 | 0.0295 |
FOLD 3 | 0.0274 |
FOLD 4 | 0.0350 |
FOLD 5 | 0.0325 |
Detailed results on the validation:
Cross-validation/metrics | Mean MSE |
---|---|
FOLD 1 | 0.0256 |
FOLD 2 | 0.0258 |
FOLD 3 | 0.0252 |
FOLD 4 | 0.0249 |
FOLD 5 | 0.0246 |
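The Dice scores and MSE values reported above follow standard formulas; a minimal pure-Python sketch on flattened masks (the repository's own evaluation code may differ, e.g. by operating on tensors):

```python
def dice(pred, truth):
    """Dice coefficient between two flat binary masks (lists of 0/1):
    2*|A∩B| / (|A|+|B|)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

def mse(pred, truth):
    """Mean squared error between two flat float maps (e.g. density maps)."""
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred)
```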
Some examples annotated by LabelMe and Point2Cell respectively, compared to the ground truth (DICE coefficient percentage).
Annotation speed comparison between Point2Cell and LabelMe:
Tool | Time per image | Average Dice coefficient |
---|---|---|
Point2Cell (our tool) | 14.6 sec (std 1.35 sec) | 94.97% (std 0.88%) |
LabelMe | 96.1 sec (std 8.03 sec) | 91.30% (std 0.90%) |
Detailed results:
images/metrics | LabelMe Time (sec) | LabelMe DICE | Point2Cell Time (sec) | Point2Cell DICE |
---|---|---|---|---|
img_0 | 108 | 92.05% | 13 | 95.96% |
img_1 | 95 | 92.81% | 13 | 95.68% |
img_2 | 108 | 91.52% | 15 | 95.14% |
img_3 | 87 | 90.2% | 14 | 94.31% |
img_4 | 85 | 91.32% | 15 | 95.59% |
img_5 | 88 | 90.69% | 14 | 93.64% |
img_6 | 102 | 91.66% | 16 | 94.08% |
img_7 | 99 | 89.7% | 17 | 93.71% |
img_8 | 90 | 92.2% | 16 | 95.76% |
img_9 | 99 | 90.94% | 13 | 95.87% |
Email: [email protected]
@misc{Github,
author={Mehdi Ounissi and Daniel Racoceanu},
title={Point2Cell: efficient learning for interactive biomedical image annotation. Instantiation to stain-free phase-contrast microscopy},
year={2022},
url={https://github.com/ounissimehdi/Point2Cell},
}