
[ECCV'2020] STTN: Learning Joint Spatial-Temporal Transformations for Video Inpainting

Home Page: https://arxiv.org/abs/2007.10247

video-inpainting completing-videos transformer spatial-temporal

sttn's Introduction

STTN for Video Inpainting

teaser

Learning Joint Spatial-Temporal Transformations for Video Inpainting

Yanhong Zeng, Jianlong Fu, and Hongyang Chao.
In ECCV 2020.

Citation

If any part of our paper or repository is helpful to your work, please cite:

@inproceedings{yan2020sttn,
  author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang},
  title = {Learning Joint Spatial-Temporal Transformations for Video Inpainting},
  booktitle = {The Proceedings of the European Conference on Computer Vision (ECCV)},
  year = {2020}
}

Introduction

High-quality video inpainting that completes missing regions in video frames is a promising yet challenging task.

In this paper, we propose a joint Spatial-Temporal Transformer Network (STTN) for video inpainting. Specifically, we simultaneously fill missing regions in all input frames via the proposed multi-scale patch-based attention modules, and optimize STTN with a spatial-temporal adversarial loss.

To show the superiority of the proposed model, we conduct both quantitative and qualitative evaluations using standard stationary masks and more realistic moving-object masks.
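The joint spatial-temporal attention at the heart of this design can be illustrated with a toy sketch: patches from all frames are flattened into a single token sequence, so every patch attends to patches in every frame. This is an illustrative simplification (single head, one scale, random projections), not the paper's multi-scale, multi-head architecture; the function name and shapes below are our own.

```python
import numpy as np

def joint_spacetime_attention(frames, d=16, seed=0):
    """Toy single-head attention over patches from all frames jointly.

    frames: array of shape (T, N, C) -- T frames, N patches per frame,
    C channels per patch. All T*N patches form one token sequence, so a
    missing patch can borrow content from any frame. Illustrative only,
    not the STTN architecture.
    """
    T, N, C = frames.shape
    rng = np.random.default_rng(seed)
    # random query/key/value projections stand in for learned weights
    Wq, Wk, Wv = (rng.standard_normal((C, d)) / np.sqrt(C) for _ in range(3))
    tokens = frames.reshape(T * N, C)          # joint space-time sequence
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(d)              # (T*N, T*N): every patch vs. every patch
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)    # row-wise softmax
    return (attn @ v).reshape(T, N, d)
```

Because every patch attends over the whole sequence, content occluded in one frame can be borrowed from another, which is the intuition behind filling all frames simultaneously.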

STTN

Installation

Clone this repo.

git clone [email protected]:researchmm/STTN.git
cd STTN/

We build our project on Python and PyTorch. For the full set of required Python packages, we suggest creating a Conda environment from the provided YAML, e.g.

conda env create -f environment.yml 
conda activate sttn

Completing Videos Using Pretrained Model

Result videos can be generated with the pretrained models. For your reference, we provide a model pretrained on Youtube-VOS (Google Drive Folder).

  1. Download the pretrained model from the Google Drive Folder and save it in checkpoints/.

  2. Complete videos using the pretrained model. For example,

python test.py --video examples/schoolgirls_orig.mp4 --mask examples/schoolgirls  --ckpt checkpoints/sttn.pth 

The output videos are saved in examples/.

Dataset Preparation

We provide dataset split in datasets/.

Preparing Youtube-VOS (2018) Dataset. The dataset can be downloaded from here. In particular, we follow the standard train/validation/test split (3,471/474/508). The dataset should be arranged in the following directory structure:

datasets
    |- youtube-vos
        |- JPEGImages
           |- <video_id>.zip
           |- <video_id>.zip
        |- test.json 
        |- train.json 

Preparing DAVIS (2018) Dataset. The dataset can be downloaded from here. In particular, there are 90 videos with densely-annotated object masks and 60 videos without annotations. The dataset should be arranged in the following directory structure:

datasets
    |- davis
        |- JPEGImages
          |- cows.zip
          |- goat.zip
        |- Annotations
          |- cows.zip
          |- goat.zip
        |- test.json 
        |- train.json 
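Both layouts above expect one zip archive per video, whereas the raw downloads typically provide one folder of JPEG frames per video. A minimal sketch for converting the raw layout, assuming a `JPEGImages/<video_id>/` folder per video (the helper name and the flat archive layout are our assumptions, not part of the official preparation code):

```python
import os
import zipfile

def zip_frame_folders(jpeg_root):
    """Compress each per-video frame folder under jpeg_root into <video_id>.zip."""
    for video_id in sorted(os.listdir(jpeg_root)):
        folder = os.path.join(jpeg_root, video_id)
        if not os.path.isdir(folder):
            continue  # skip files such as already-created zips
        zip_path = os.path.join(jpeg_root, video_id + '.zip')
        with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
            for name in sorted(os.listdir(folder)):
                # store frames at the archive root, e.g. 00000.jpg
                zf.write(os.path.join(folder, name), arcname=name)

# zip_frame_folders('datasets/youtube-vos/JPEGImages')
```

Whether frames should sit at the archive root or under a subfolder depends on how the data loader opens the zips, so verify against the repository's dataset code before converting a full dataset.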

Training New Models

Once the dataset is ready, new models can be trained with the following command. For example:

python train.py --config configs/youtube-vos.json --model sttn 

Testing

Testing is similar to Completing Videos Using Pretrained Model.

python test.py --video examples/schoolgirls_orig.mp4 --mask examples/schoolgirls  --ckpt checkpoints/sttn.pth 

The output videos are saved in examples/.

Visualization

We provide an example of visualizing attention maps in visualization.ipynb.

Training Monitoring

We provide monitoring of training losses by running:

tensorboard --logdir release_mode                                                    

Contact

If you have any questions or suggestions about this paper, feel free to contact me ([email protected]).

sttn's People

Contributors

zengyh1900


sttn's Issues

Missing dis.pth and opt.pth files for fine-tuning

Hi,

I am trying to fine-tune your Youtube-VOS checkpoint on a custom dataset. I tried to resume training, but the .pth file only contains the netG data. As a result of loading only the generator and initialising a new discriminator, the training collapses instantly. I believe I require netD, optimD, and optimG from your training runs. Could you provide these?

Thank you very much for your time.

Trained model performance

Hello,

I tried to train a model with the same performance as the one you shared.

As described in the paper, a new model was trained on Youtube-VOS data (500,000 iterations).

After that, the model was fine-tuned on DAVIS data (500,000 iterations).

Nevertheless, the resulting model performs worse than the one you shared.

Why is the performance different? Are there any additional tunings that are not listed in the paper?

If there were, I'd like to know.

Best regards,
Son

Youtube-VOS

hi there,

May I ask something about your data preparation? The structure of the Youtube-VOS dataset is not what you describe in your project: there are folders for the different video clips and no zip files. Do we need to compress each folder into a zip file? Can you share the data preparation code for this? Thanks.

how to change the output video size

I tried to change the values of w and h in test.py, but it raises a RuntimeError.
[screenshot of the RuntimeError]
Can anyone explain why this is happening?
Thank you
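A likely cause, though this is our assumption and not confirmed by the authors, is that the network downsamples by a fixed factor, so w and h must be multiples of that factor. A small helper that rounds a requested size to the nearest valid one (the factor 8 is a placeholder; check the model's actual downsampling factor):

```python
def round_to_multiple(x, base=8):
    """Round x to the nearest positive multiple of base.

    The exact divisibility constraint depends on the network's
    downsampling factor; base=8 is an assumption, not a value
    taken from the STTN code.
    """
    return max(base, int(round(x / base)) * base)

# e.g. round_to_multiple(430) -> 432, round_to_multiple(241) -> 240
```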

Could you share other pretrained model?

Hello, thanks for sharing your code.

I tried to fine-tune from pretrained model that you provide.
But I also need netD, optimG and optimD.

Can I get other models as well?

output resolution?

Thank you for making this code available; it works great. However, it downsamples the input to 432x240, as shown here:

w, h = 432, 240
ref_length = 20
neighbor_stride = 5
default_fps = 24

This obviously downsamples the output as well, resulting in a lower-resolution result.

Can I run this on full size (HD, 4kUHD) videos and output the same resolution as input?

Can you also please explain what ref_length and neighbor_stride do, and when to change these values?

Thank you!
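As a hedged sketch of what such parameters typically control in sliding-window video inpainting: neighbor_stride defines a local window of frames completed together around the current position, and ref_length is the sampling interval for long-range reference frames drawn from the whole video. The function below reconstructs that general pattern; the exact logic in test.py may differ.

```python
def select_frames(f, video_length, neighbor_stride=5, ref_length=20):
    """Pick frames fed to the model when completing position f.

    neighbor_ids: a local window of frames around f.
    ref_ids: long-range reference frames sampled every ref_length
    frames across the video, excluding the local window.
    This mirrors the common pattern, not necessarily test.py.
    """
    neighbor_ids = list(range(max(0, f - neighbor_stride),
                              min(video_length, f + neighbor_stride + 1)))
    ref_ids = [i for i in range(0, video_length, ref_length)
               if i not in neighbor_ids]
    return neighbor_ids, ref_ids

# select_frames(0, 50) -> ([0, 1, 2, 3, 4, 5], [20, 40])
```

Under this reading, raising neighbor_stride trades speed for temporal smoothness in the local window, while lowering ref_length gives the model more distant frames to borrow content from.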

DAVIS2018 does not exist

Hello,

I am looking for DAVIS 2018 at your link, but none of the datasets there have zip files. Currently, your code does not work for the DAVIS dataset.

Is it because they removed the DAVIS 2018 dataset, or because the link is incorrect? Either way, could you please check it?

Thanks,
Anh

sample_length = 5?

I want to ask a question about sample_length.
In the paper, sample_length is set to 10, but in the code it is set to 5 in the file "youtube-vos.json".

Can you tell me how I should understand this?
