- Install PyTorch

```shell
pip3 install torch  # if you haven't installed it
```
- Install detectron2
Refer to https://detectron2.readthedocs.io/en/latest/tutorials/install.html and find the installation instructions for detectron2 that match your CUDA and PyTorch versions.
```shell
# See your PyTorch version
python3 -c "import torch; print(torch.__version__)"
# See your CUDA version
nvcc --version

# E.g. if the CUDA version and the PyTorch version of the current device are 11.1 and 1.10.0 respectively,
# then you can:
python3 -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.10/index.html
```
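The wheel index URL above is derived from the CUDA and PyTorch versions. As a quick illustration of that mapping (a hypothetical helper, not part of this repo or of detectron2):

```python
def detectron2_wheel_index(cuda_version: str, torch_version: str) -> str:
    """Build the detectron2 wheel index URL from CUDA and PyTorch versions.

    E.g. CUDA "11.1" -> "cu111", PyTorch "1.10.0" -> "torch1.10".
    """
    cu = "cu" + cuda_version.replace(".", "")
    major, minor = torch_version.split(".")[:2]
    return (
        "https://dl.fbaipublicfiles.com/detectron2/wheels/"
        f"{cu}/torch{major}.{minor}/index.html"
    )

print(detectron2_wheel_index("11.1", "1.10.0"))
# -> https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.10/index.html
```

Only certain CUDA/PyTorch combinations have prebuilt wheels, so always confirm the URL against the official install page.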
- Install other packages

```shell
pip3 install torchvision
pip3 install opencv-python
```
- Download the dataset.zip (72 MB), unzip it, and move it into the repo root. The structure will be:

```
Nuclei-Instance-Segmentation/
|___dataset/
|   |___test/
|   |   |___TCGA-50-5931-01Z-00-DX1.png
|   |   |___TCGA-A7-A13E-01Z-00-DX1.png
|   |   ...... (other 4 images, 6 test images in total)
|   |___train/
|   |   |___TCGA-18-5592-01Z-00-DX1 (first data's directory)
|   |   |   |___images
|   |   |   |   |___TCGA-18-5592-01Z-00-DX1.png (this data's image)
|   |   |   |___masks
|   |   |       |___mask_0001.png
|   |   |       |___mask_0002.png
|   |   |       ...... (all masks of instances)
|   |   |___TCGA-21-5784-01Z-00-DX1 (second data's directory)
|   |   ...... (other data, 24 in total)
|   |___test_img_ids.json
|___...... (other dir)
```
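Before running anything, it can be worth sanity-checking the unpacked layout. A minimal sketch of such a check (a hypothetical helper, not part of the repo):

```python
from pathlib import Path

def check_dataset(root="dataset"):
    """Count test images, train cases, and per-case mask files under `root`."""
    root = Path(root)
    n_test = len(list((root / "test").glob("*.png")))
    cases = sorted(p for p in (root / "train").iterdir() if p.is_dir())
    masks = {c.name: len(list((c / "masks").glob("*.png"))) for c in cases}
    return n_test, len(cases), masks

if Path("dataset").exists():
    n_test, n_train, masks = check_dataset()
    print(f"{n_test} test images, {n_train} train cases")  # expect 6 and 24
```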
- Run `prepare.py` to automatically produce the train/valid/test datasets. Default: 24 train, 0 valid. If you want to change the number of valid images, modify the variables in `prepare.py`:

```python
# Default setting
N_VALID = 0      # 0 valid
MAX_IMG_ID = 23  # 24 data in total, 0 for valid, 24 for train, so the max img id is 23 (0~23)

# If you want 2 valid
N_VALID = 2      # 2 valid
MAX_IMG_ID = 21  # 24 data in total, 2 for valid, 22 for train, so the max img id is 21 (0~21)
```

```shell
python3 prepare.py
```
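The two settings are linked: with 24 training cases in total and 0-based image ids, `MAX_IMG_ID` is always `24 - N_VALID - 1`. A tiny sketch of that relationship (hypothetical helper names):

```python
N_TOTAL = 24  # number of training cases in this dataset

def split_settings(n_valid, n_total=N_TOTAL):
    """Return (n_train, max_img_id) for a given number of validation images."""
    n_train = n_total - n_valid
    max_img_id = n_train - 1  # ids are 0-based, so the last train id is n_train - 1
    return n_train, max_img_id

print(split_settings(0))  # -> (24, 23)
print(split_settings(2))  # -> (22, 21)
```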
- Download my pretrained model: link
  This was trained for 37999 iterations from the cascade_mask_rcnn_R_50_FPN_3x pretrained model on detectron2.

```shell
pip3 install gdown
gdown --id 1to1K6_dnSVnY9-Qv3CJz_ITTaPqaxA1m
```
- Make augmentation data: follow the instructions in `make_augmentation_data.ipynb` and run all the cells. The augmented data will be produced and saved in "aug/". (If you only want to run inference, this part can be skipped.)
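The notebook is the authoritative recipe; the key constraint in instance-segmentation augmentation is that every geometric transform applied to an image must be applied identically to all of its instance masks. A minimal pure-Python sketch of a paired horizontal flip (illustrative only, not the notebook's code):

```python
def hflip(grid):
    """Horizontally flip a 2D array given as a list of rows."""
    return [list(reversed(row)) for row in grid]

def augment_pair(image, masks):
    """Apply the same flip to the image and to every instance mask."""
    return hflip(image), [hflip(m) for m in masks]

image = [[1, 2], [3, 4]]
masks = [[[1, 0], [0, 0]]]  # one binary instance mask
aug_image, aug_masks = augment_pair(image, masks)
print(aug_image)  # -> [[2, 1], [4, 3]]
print(aug_masks)  # -> [[[0, 1], [0, 0]]]
```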
- Train:

```shell
# 1. Modify the settings in main.py (e.g. from which pretrained model)
# 2. Run it.
python3 main.py
```
- Track your training using TensorBoard:

```shell
# Default training output dir is "output/"
tensorboard --logdir output
```
- Run inference on the test images and make a submission file.

```shell
# You can modify the settings in inference.py
python3 inference.py
```
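Submission formats for instance segmentation typically encode each predicted binary mask as run-length encoding (RLE); the exact format here is whatever `inference.py` writes. As an illustration of the idea only (plain uncompressed RLE over a flattened mask, not necessarily the repo's encoding):

```python
def rle_encode(flat_mask):
    """Run-length encode a flat binary mask as (start, length) pairs, 1-indexed."""
    runs = []
    start = None
    for i, v in enumerate(flat_mask, start=1):
        if v and start is None:
            start = i                        # a run of 1s begins here
        elif not v and start is not None:
            runs.append((start, i - start))  # the run just ended
            start = None
    if start is not None:                    # mask ends inside a run
        runs.append((start, len(flat_mask) - start + 1))
    return runs

print(rle_encode([0, 1, 1, 0, 1]))  # -> [(2, 2), (5, 1)]
```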