UniInst: Towards End-to-End Instance Segmentation with Unique Representation
| Name | inf. time | mask AP | download |
|---|---|---|---|
| UniInst_MS_R_50_3x | 20 FPS | 38.4 | model |
| UniInst_MS_R_50_6x | 20 FPS | 38.9 | model |
| UniInst_MS_R_101_3x | 16 FPS | 39.7 | model |
| UniInst_MS_R_101_6x | 16 FPS | 40.2 | model |
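To pick a checkpoint from the table above by speed/accuracy trade-off, a small stdlib-only sketch (the numbers are copied from the table; this is purely illustrative and not part of the UniInst codebase):

```python
# Speed/accuracy trade-off for the UniInst models listed above.
# FPS and mask AP values are taken directly from the table.
models = {
    "UniInst_MS_R_50_3x": {"fps": 20, "mask_ap": 38.4},
    "UniInst_MS_R_50_6x": {"fps": 20, "mask_ap": 38.9},
    "UniInst_MS_R_101_3x": {"fps": 16, "mask_ap": 39.7},
    "UniInst_MS_R_101_6x": {"fps": 16, "mask_ap": 40.2},
}

def best_model(min_fps: int) -> str:
    """Return the highest-AP model that still meets the FPS budget."""
    candidates = {n: m for n, m in models.items() if m["fps"] >= min_fps}
    return max(candidates, key=lambda n: candidates[n]["mask_ap"])

print(best_model(20))  # -> UniInst_MS_R_50_6x
print(best_model(16))  # -> UniInst_MS_R_101_6x
```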
For more models and information, please refer to the CondInst README.md.
Note that:
- Inference time for all projects is measured on an NVIDIA V100 GPU with batch size 1.
- APs are evaluated on the COCO2017 test split unless otherwise specified.
First install Detectron2 following the official guide: INSTALL.md.
Then build UniInst with:

```
python3 setup.py build develop
```
Some projects may require special setup; please follow their own README.md in configs.
- Pick a model and its config file, for example, UniInst_R_50_3x.yaml.
- Download the model.
- Run the demo with:

```
python demo/demo.py \
    --config-file configs/UniInst/UniInst_MS_R_50_3x.yaml \
    --input input1.jpg input2.jpg \
    --opts MODEL.WEIGHTS UniInst_R_50_3x.pth
```
To train a model with "train_net.py", first set up the corresponding datasets following datasets/README.md, then run:

```
OMP_NUM_THREADS=1 python3 tools/train_net.py \
    --config-file configs/UniInst/UniInst_MS_R_50_3x.yaml \
    --num-gpus 8 \
    OUTPUT_DIR training_dir/UniInst_R_50_3x
```
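The trailing `OUTPUT_DIR training_dir/UniInst_R_50_3x` pair above is a command-line config override: Detectron2 loads the YAML config first and then merges alternating KEY VALUE pairs on top of it. A minimal stdlib sketch of that merge pattern using a plain dict (Detectron2 itself uses a yacs CfgNode, not a dict; this is only an illustration):

```python
# Sketch of the KEY VALUE override mechanism used by train_net.py.
# Detectron2's real implementation is CfgNode.merge_from_list in yacs.
def merge_from_list(cfg: dict, opts: list) -> dict:
    """Apply alternating KEY VALUE pairs on top of a loaded config."""
    assert len(opts) % 2 == 0, "opts must come in KEY VALUE pairs"
    for key, value in zip(opts[0::2], opts[1::2]):
        cfg[key] = value
    return cfg

# Defaults as they might come from the YAML file (hypothetical values):
cfg = {"OUTPUT_DIR": "./output", "MODEL.WEIGHTS": ""}
merge_from_list(cfg, ["OUTPUT_DIR", "training_dir/UniInst_R_50_3x"])
print(cfg["OUTPUT_DIR"])  # -> training_dir/UniInst_R_50_3x
```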
To evaluate the model after training, run:

```
OMP_NUM_THREADS=1 python3 tools/train_net.py \
    --config-file configs/UniInst/UniInst_MS_R_50_3x.yaml \
    --eval-only \
    --num-gpus 8 \
    OUTPUT_DIR training_dir/UniInst_R_50_3x \
    MODEL.WEIGHTS training_dir/UniInst_R_50_3x/model_final.pth
```
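During training and evaluation, Detectron2 logs metrics to a `metrics.json` file in OUTPUT_DIR, one JSON object per line. A hedged stdlib sketch for pulling the last reported mask AP out of it (the `segm/AP` key assumes Detectron2's default COCO-evaluator logging format; check your own file if the key differs):

```python
import json

def last_mask_ap(metrics_path: str):
    """Return the last 'segm/AP' value found in a Detectron2-style
    metrics.json (one JSON object per line; not every line carries
    evaluation results, so we keep scanning to the end)."""
    ap = None
    with open(metrics_path) as f:
        for line in f:
            record = json.loads(line)
            if "segm/AP" in record:
                ap = record["segm/AP"]
    return ap

# Toy metrics file mimicking the format (values are made up):
with open("metrics.json", "w") as f:
    f.write(json.dumps({"iteration": 100, "total_loss": 1.2}) + "\n")
    f.write(json.dumps({"iteration": 200, "segm/AP": 38.4}) + "\n")

print(last_mask_ap("metrics.json"))  # -> 38.4
```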
Note that:
- The configs are made for 8-GPU training. To train on a different number of GPUs, change `--num-gpus` accordingly.
- If you want to measure the inference time, please change `--num-gpus` to 1.
- We set `OMP_NUM_THREADS=1` by default, which achieves the best speed on our machines; please change it as needed.
- This quick start is made for FCOS. If you are using other projects, please check the projects' own README.md in configs.
Note that our work is based on AdelaiDet. If you use our code in your research, please also cite AdelaiDet.
Please use the following BibTeX entries:
```
@misc{tian2019adelaidet,
  author       = {Tian, Zhi and Chen, Hao and Wang, Xinlong and Liu, Yuliang and Shen, Chunhua},
  title        = {{AdelaiDet}: A Toolbox for Instance-level Recognition Tasks},
  howpublished = {\url{https://git.io/adelaidet}},
  year         = {2019}
}
```
The BibTeX entry for the UniInst arXiv paper will be added later.