
A Transductive Approach for Video Object Segmentation

This repo contains the PyTorch implementation of the CVPR 2020 paper A Transductive Approach for Video Object Segmentation.

Pretrained Models and Results

We provide three pretrained ResNet50 models. They are trained on the DAVIS 17 training set, on the combined DAVIS 17 training and validation sets, and on the YouTube-VOS training set.

Our pre-computed results can be downloaded here.

Our results on DAVIS17 and YouTube-VOS:

| Dataset              | J    | F    |
| -------------------- | ---- | ---- |
| DAVIS17 validation   | 69.9 | 74.7 |
| DAVIS17 test-dev     | 58.8 | 67.4 |
| YouTube-VOS (seen)   | 67.1 | 69.4 |
| YouTube-VOS (unseen) | 63.0 | 71.6 |

Usage

  • Install Python 3, PyTorch >= 0.4, and the Pillow (PIL) package.

  • Clone this repo:

    git clone https://github.com/microsoft/transductive-vos.pytorch
  • Prepare DAVIS 17 train-val dataset:

    # first download the dataset
    cd /path-to-data-directory/
    wget https://data.vision.ee.ethz.ch/csergi/share/davis/DAVIS-2017-trainval-480p.zip
    # unzip
    unzip DAVIS-2017-trainval-480p.zip
    # split train-val dataset
    python /path-to-this-repo/dataset/split_trainval.py -i ./DAVIS
    # clean up
    rm -rf ./DAVIS

    Now, your data directory should be structured like this:

    .
    |-- DAVIS_train
        |-- JPEGImages/480p/
            |-- bear
            |-- ...
        |-- Annotations/480p/
    |-- DAVIS_val
        |-- JPEGImages/480p/
            |-- bike-packing
            |-- ...
        |-- Annotations/480p/ 
    
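    After splitting, you can sanity-check the layout before training. The following is a minimal sketch using only the standard library; the helper name `check_davis_layout` is ours, not part of this repo:

```python
import os

# Subdirectories each split must contain after running split_trainval.py
REQUIRED_SUBDIRS = ["JPEGImages/480p", "Annotations/480p"]


def check_davis_layout(root):
    """Return a list of expected directories missing under the data root."""
    missing = []
    for split in ("DAVIS_train", "DAVIS_val"):
        for sub in REQUIRED_SUBDIRS:
            path = os.path.join(root, split, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing


if __name__ == "__main__":
    problems = check_davis_layout(".")
    if problems:
        print("Missing directories:", problems)
    else:
        print("DAVIS layout looks good.")
```

    Run it from the data directory; an empty result means the structure matches the tree shown above.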
  • Training on DAVIS training set:

    python -m torch.distributed.launch --master_port 12347 --nproc_per_node=4 main.py --data /path-to-your-davis-directory/

    All training parameters default to our best setting for reproducing the ResNet50 model. This setting requires 4 GPUs with 16 GB of CUDA memory each. Feel free to contact the authors about parameter settings if you want to train with a different number of GPUs.

    If you want to change any parameters, see the comments in main.py or run

    python main.py -h
  • Inference on the DAVIS validation set (1 GPU with 12 GB of CUDA memory is needed):

    python inference.py -r /path-to-pretrained-model -s /path-to-save-predictions

    As above, all inference parameters default to our best setting on the DAVIS validation set, which reproduces our result with a J-mean of 0.699. The saved predictions can be evaluated directly with the DAVIS evaluation code.
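
    For reference, the J measure is the region similarity (intersection-over-union) between a predicted mask and the ground truth, averaged over frames and objects. Below is a minimal pure-Python sketch of the per-mask computation with binary masks as nested lists; this is an illustration, not the official DAVIS evaluation code:

```python
def jaccard(pred, gt):
    """Intersection-over-union of two binary masks (lists of 0/1 rows)."""
    inter = union = 0
    for pred_row, gt_row in zip(pred, gt):
        for p, g in zip(pred_row, gt_row):
            inter += p & g
            union += p | g
    # Convention: empty prediction and empty ground truth count as a match
    return 1.0 if union == 0 else inter / union


pred = [[0, 1, 1],
        [0, 1, 0]]
gt   = [[0, 1, 0],
        [0, 1, 1]]
print(jaccard(pred, gt))  # 2 overlapping / 4 in union = 0.5
```

    The official benchmark additionally reports F, a contour accuracy measure, which is why both columns appear in the results table above.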

Further Improvements

This approach is simple with a clean implementation; a few small tricks can further improve its performance. For example,

  • If you perform an epoch test, i.e., select the best-performing epoch, you can gain roughly 1.5 additional points of absolute performance on the DAVIS17 dataset.
  • Pretraining the model on other image datasets with mask annotations, such as those for semantic segmentation and salient object detection, may bring further improvements.
  • ... ...
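
The epoch-test trick above amounts to evaluating several saved checkpoints on the validation set and keeping the best one. A minimal sketch, assuming you have already computed a per-epoch validation J-mean (the `scores` values here are hypothetical, not numbers from the paper):

```python
def select_best_epoch(scores):
    """Pick the checkpoint epoch with the highest validation score.

    scores: dict mapping epoch number -> validation J-mean.
    Returns (best_epoch, best_score).
    """
    best_epoch = max(scores, key=scores.get)
    return best_epoch, scores[best_epoch]


# Hypothetical per-epoch validation J-means for illustration only
scores = {220: 0.684, 230: 0.699, 240: 0.691}
epoch, j_mean = select_best_epoch(scores)
print(epoch, j_mean)  # 230 0.699
```

In practice each score would come from running inference.py with the corresponding checkpoint and scoring the predictions with the DAVIS evaluation code.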

Contact

For any questions, please feel free to reach out to

Yizhuo Zhang: [email protected]
Zhirong Wu: [email protected]

Citations

@inproceedings{zhang2020a,
  title={A Transductive Approach for Video Object Segmentation},
  author={Zhang, Yizhuo and Wu, Zhirong and Peng, Houwen and Lin, Stephen},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
