baowenbo / DAIN

Depth-Aware Video Frame Interpolation (CVPR 2019)

Home Page: https://sites.google.com/view/wenbobao/dain

License: MIT License

Languages: Python 58.89%, Shell 0.33%, C++ 8.33%, Cuda 30.09%, Jupyter Notebook 2.35%

dain's Introduction

DAIN (Depth-Aware Video Frame Interpolation)

Project | Paper

Wenbo Bao, Wei-Sheng Lai, Chao Ma, Xiaoyun Zhang, Zhiyong Gao, and Ming-Hsuan Yang

IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CVPR 2019

This work is developed based on our TPAMI work MEMC-Net, where we propose the adaptive warping layer. Please also consider citing it.

Table of Contents

  1. Introduction
  2. Citation
  3. Requirements and Dependencies
  4. Installation
  5. Testing Pre-trained Models
  6. Downloading Results
  7. Slow-motion Generation
  8. Training New Models
  9. Google Colab Demo

Introduction

We propose the Depth-Aware video frame INterpolation (DAIN) model to explicitly detect occlusion by exploiting depth cues. We develop a depth-aware flow projection layer to synthesize intermediate flows that preferentially sample closer objects over farther ones. Our method achieves state-of-the-art performance on the Middlebury dataset. We provide videos here.
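To make the projection rule concrete, here is a minimal sketch (not the repository's CUDA layer, which is built as depthflowprojection_cuda in my_package) of the inverse-depth weighting that lets closer objects dominate; the function name and array shapes are hypothetical:

import numpy as np

def depth_aware_aggregate(flows, depths):
    # Blend N candidate flow vectors passing through one target pixel,
    # weighting each candidate by inverse depth so closer objects dominate.
    # flows: (N, 2) candidate flow vectors; depths: (N,) source-pixel depths.
    # Toy sketch of the weighting rule only; the real layer is a CUDA
    # scatter over the whole image.
    w = 1.0 / depths
    return (w[:, None] * flows).sum(axis=0) / w.sum()

# A near object (depth 1) and a far one (depth 10) both project to this pixel:
print(depth_aware_aggregate(np.array([[2.0, 0.0], [-1.0, 0.0]]),
                            np.array([1.0, 10.0])))
# -> approximately [1.73, 0.0], dominated by the closer object's flow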

Citation

If you find the code and datasets useful in your research, please cite:

@inproceedings{DAIN,
    author    = {Bao, Wenbo and Lai, Wei-Sheng and Ma, Chao and Zhang, Xiaoyun and Gao, Zhiyong and Yang, Ming-Hsuan}, 
    title     = {Depth-Aware Video Frame Interpolation}, 
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
    year      = {2019}
}
@article{MEMC-Net,
     title={MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement},
     author={Bao, Wenbo and Lai, Wei-Sheng and Zhang, Xiaoyun and Gao, Zhiyong and Yang, Ming-Hsuan},
     journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
     doi={10.1109/TPAMI.2019.2941941},
     year={2018}
}

Requirements and Dependencies

  • Ubuntu (we test with Ubuntu 16.04.5 LTS)
  • Python (we test with Python 3.6.8 in Anaconda3 4.1.1)
  • CUDA & cuDNN (we test with CUDA 9.0 and cuDNN 7.0)
  • PyTorch (the customized depth-aware flow projection and other layers require the ATen API in PyTorch 1.0.0)
  • GCC (compiling the PyTorch 1.0.0 extension files (.c/.cu) requires the gcc 4.9.1 and nvcc 9.0 compilers)
  • NVIDIA GPU (we use a Titan X (Pascal) with compute capability 6.1; the build scripts target compute_50/52/60/61 devices. If your device has a higher compute capability, revise the nvcc flags accordingly, as in the sketch below)
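For example, to target newer GPUs you would extend the '-gencode' list passed to nvcc in each setup.py under my_package/ and PWCNet/correlation_package_pytorch1_0/. A sketch (the compute_70/75 entries are an assumed addition for Volta/Turing cards; the exact variable name in each setup.py may differ):

# Sketch of the nvcc '-gencode' argument list in a setup.py.
nvcc_args = [
    '-gencode', 'arch=compute_50,code=sm_50',
    '-gencode', 'arch=compute_52,code=sm_52',
    '-gencode', 'arch=compute_60,code=sm_60',
    '-gencode', 'arch=compute_61,code=sm_61',
    '-gencode', 'arch=compute_70,code=sm_70',  # assumed addition for Volta
    '-gencode', 'arch=compute_75,code=sm_75',  # assumed addition for Turing
]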

Installation

Download repository:

$ git clone https://github.com/baowenbo/DAIN.git

Before building the PyTorch extensions, be sure you have PyTorch >= 1.0.0:

$ python -c "import torch; print(torch.__version__)"

Generate our PyTorch extensions:

$ cd DAIN
$ cd my_package 
$ ./build.sh

Generate the Correlation package required by PWCNet:

$ cd ../PWCNet/correlation_package_pytorch1_0
$ ./build.sh
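After both builds finish, a quick smoke test is to import the compiled extensions from Python (the module names below are the ones the build scripts produce; if an import fails, e.g. with an undefined symbol, rerun the corresponding build.sh against the PyTorch version you actually run with):

# Each compiled extension should import without an undefined-symbol error.
for mod in ('correlation_cuda', 'depthflowprojection_cuda',
            'flowprojection_cuda', 'interpolation_cuda',
            'filterinterpolation_cuda', 'separableconv_cuda'):
    __import__(mod)
    print(mod, 'OK')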

Testing Pre-trained Models

Make model weights dir and Middlebury dataset dir:

$ cd DAIN
$ mkdir model_weights
$ mkdir MiddleBurySet

Download pretrained models,

$ cd model_weights
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/best.pth

and the Middlebury dataset:

$ cd ../MiddleBurySet
$ wget http://vision.middlebury.edu/flow/data/comp/zip/other-color-allframes.zip
$ unzip other-color-allframes.zip
$ wget http://vision.middlebury.edu/flow/data/comp/zip/other-gt-interp.zip
$ unzip other-gt-interp.zip
$ cd ..

Preinstallation (rebuild the extensions if you have not already done so):

$ cd PWCNet/correlation_package_pytorch1_0
$ sh build.sh
$ cd ../my_package
$ sh build.sh
$ cd ..

We are now ready to run the demo:

$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py

The interpolated results are under MiddleBurySet/other-result-author/[random number]/, where the random number is used to distinguish different runs.

Downloading Results

Our DAIN model achieves state-of-the-art performance on UCF101, Vimeo90K, and Middlebury (eval and other). Download our interpolated results with:

$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/UCF101_DAIN.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/Vimeo90K_interp_DAIN.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/Middlebury_eval_DAIN.zip
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/Middlebury_other_DAIN.zip

Slow-motion Generation

Our model can generate a slow-motion effect with a minor modification to the network architecture. Run the following command with time_step = 0.25 to generate 4x slow motion:

$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury_slowmotion.py --netName DAIN_slowmotion --time_step 0.25

or set time_step to 0.125 or 0.1, as follows,

$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury_slowmotion.py --netName DAIN_slowmotion --time_step 0.125
$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury_slowmotion.py --netName DAIN_slowmotion --time_step 0.1

to generate 8x and 10x slow motion, respectively. Or, if you would like 100x slow motion just for fun:

$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury_slowmotion.py --netName DAIN_slowmotion --time_step 0.01
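The relation between time_step and the slow-motion factor: the model synthesizes a frame at every multiple of time_step between two input frames, so a step of 1/N yields N-1 new frames per input pair, i.e. an Nx slowdown. A quick sanity check:

# Slow-motion factor implied by a --time_step value.
def slowmo_factor(time_step):
    n = round(1.0 / time_step)   # output frames per original frame interval
    return n, n - 1              # (slowdown factor, new frames per pair)

for ts in (0.25, 0.125, 0.1, 0.01):
    factor, inserted = slowmo_factor(ts)
    print(f'time_step={ts}: {factor}x slow motion, {inserted} new frames per pair')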

You may also want to create gif animations by:

$ cd MiddleBurySet/other-result-author/[random number]/Beanbags
$ convert -delay 1 *.png -loop 0 Beanbags.gif   # a delay of 1 tick = 10 ms per frame

Have fun and enjoy yourself!

Training New Models

Download the Vimeo90K triplet dataset for the video frame interpolation task (see also Xue et al., IJCV 2019):

$ cd DAIN
$ mkdir /path/to/your/dataset && cd /path/to/your/dataset
$ wget http://data.csail.mit.edu/tofu/dataset/vimeo_triplet.zip
$ unzip vimeo_triplet.zip
$ rm vimeo_triplet.zip

Download the pretrained MegaDepth and PWCNet models:

$ cd MegaDepth/checkpoints/test_local
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/best_generalization_net_G.pth
$ cd ../../../PWCNet
$ wget http://vllab1.ucmerced.edu/~wenbobao/DAIN/pwc_net.pth.tar
$ cd ..

Run the training script:

$ CUDA_VISIBLE_DEVICES=0 python train.py --datasetPath /path/to/your/dataset --batch_size 1 --save_which 1 --lr 0.0005 --rectify_lr 0.0005 --flow_lr_coe 0.01 --occ_lr_coe 0.0 --filter_lr_coe 1.0 --ctx_lr_coe 1.0 --alpha 0.0 1.0 --patience 4 --factor 0.2

The optimized models will be saved to the model_weights/[random number] directory, where [random number] is generated for different runs.

Replace the pre-trained model_weights/best.pth model with the newly trained model_weights/[random number]/best.pth model. Then test the new model by executing:

$ CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py

Google Colab Demo

This is a modification of DAIN that runs on Google Colab and performs a full demo interpolation from a source video to a target video.

Original Notebook File by btahir can be found here.

To use the Colab, follow these steps:

  • Download the Colab_DAIN.ipynb file (link).
  • Visit Google Colaboratory (link).
  • Select the "Upload" option, and upload the .ipynb file.
  • Run the cells one by one, following the instructions.

Colab file authors: Styler00Dollar and Alpha.

Contact

Wenbo Bao; Wei-Sheng (Jason) Lai

License

See MIT License

dain's People

Contributors

alphagit, baowenbo, kmbriedis, productiveasparagus56, rf-nelson, rivermont, styler00dollar, zhhezhhe


dain's Issues

About warping

I want to use the obtained projected flows to warp a three-channel image, without the kernel estimation. What should I do?
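For reference, a plain backward warp with a flow field (not DAIN's adaptive warping layer) can be written with PyTorch's grid_sample; a minimal sketch, assuming a recent PyTorch:

import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    # Warp img (B, 3, H, W) with flow (B, 2, H, W) by bilinear sampling:
    # each output pixel (x, y) reads the input at (x + flow_x, y + flow_y).
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    gx = xs.to(img) + flow[:, 0]
    gy = ys.to(img) + flow[:, 1]
    # grid_sample expects sampling coordinates normalized to [-1, 1]
    grid = torch.stack((2 * gx / (w - 1) - 1, 2 * gy / (h - 1) - 1), dim=-1)
    return F.grid_sample(img, grid, align_corners=True)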

Pytorch 1.2.0 with Python 3.6.8

Hello,

I am a bit inexperienced with Linux/Ubuntu, so this might be entirely me misunderstanding the requirements.
As far as I understand, PyTorch 1.2.0 requires a Python version above 3.7, and there is no PyTorch 1.2.0 build for CUDA 9.0.
Do the versions need to be exactly what you wrote, or are those minimums?
Again, this is probably me being a bit slow, but I would be grateful for an answer in any case.

HD testset

Hi, nice work!
Can you share the ground truth (GT) of the HD dataset?
I tried looking for it but unfortunately could not find it.
Thanks!

CUDA related errors when compiling from source [with output log]

When executing sudo ./build.sh I get this output:
pytorch 1.4.0
cuda-10.2.89-3
cudnn-7.6.5.32-3
on 5.4.24-1-MANJARO
GPU: NVIDIA GeForce GTX 1070

Need pytorch>=1.0.0
./build.sh: line 4: activate: No such file or directory
No CUDA runtime is found, using CUDA_HOME='/opt/cuda'
running install
running bdist_egg
running egg_info
creating mindepthflowprojection_cuda.egg-info
writing mindepthflowprojection_cuda.egg-info/PKG-INFO
writing dependency_links to mindepthflowprojection_cuda.egg-info/dependency_links.txt
writing top-level names to mindepthflowprojection_cuda.egg-info/top_level.txt
writing manifest file 'mindepthflowprojection_cuda.egg-info/SOURCES.txt'
reading manifest file 'mindepthflowprojection_cuda.egg-info/SOURCES.txt'
writing manifest file 'mindepthflowprojection_cuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'mindepthflowprojection_cuda' extension
creating build
creating build/temp.linux-x86_64-3.8
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fno-semantic-interposition -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/usr/lib/python3.8/site-packages/torch/include -I/usr/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/usr/lib/python3.8/site-packages/torch/include/TH -I/usr/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/usr/include/python3.8 -c mindepthflowprojection_cuda.cc -o build/temp.linux-x86_64-3.8/mindepthflowprojection_cuda.o -std=c++11 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=mindepthflowprojection_cuda -D_GLIBCXX_USE_CXX11_ABI=1
In file included from /usr/include/c10/cuda/CUDAStream.h:9,
                 from /usr/include/ATen/cuda/CUDAContext.h:11,
                 from mindepthflowprojection_cuda.cc:5:
/usr/include/c10/cuda/CUDAMacros.h:4:10: fatal error: c10/cuda/impl/cuda_cmake_macros.h: No such file or directory
    4 | #include <c10/cuda/impl/cuda_cmake_macros.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1
[The same sequence (egg_info, build_ext, the identical fatal error on c10/cuda/impl/cuda_cmake_macros.h, and gcc failing with exit status 1) repeats for the flowprojection_cuda, separableconv_cuda, interpolationch_cuda, depthflowprojection_cuda, interpolation_cuda, separableconvflow_cuda, and filterinterpolation_cuda extensions.]

Issues running from source on Linux

I am attempting to run DAIN on Linux, and the only way I could find was to clone this git repository and compile it. However, not a single line in the build.sh file seems to work. Is there something I might be missing or doing wrong?

Google Colab CUDA error

Tried to get this running on Colab, but I'm running into cuda issues...
Link to notebook.

Any ideas on how to fix this? Would be great to just have a Colab to experiment!

error in correlation_forward_cuda_kernel: no kernel image is available for execution on the device
Warning: Legacy autograd function with non-static forward method is deprecated and will be removed in 1.3. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function) (THPFunction_do_forward at /pytorch/torch/csrc/autograd/python_function.cpp:622)

Traceback (most recent call last):
  File "demo_MiddleBury.py", line 131, in <module>
    y_s,offset,filter = model(torch.stack((X0, X1),dim = 0))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/DAIN/networks/DAIN.py", line 149, in forward
    self.forward_flownets(self.flownets, cur_offset_input, time_offsets=time_offsets),
  File "/content/DAIN/networks/DAIN.py", line 205, in forward_flownets
    temp = model(input)  # this is a single direction motion results, but not a bidirectional one
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/DAIN/PWCNet/PWCNet.py", line 220, in forward
    corr6 = self.corr(c16, c26)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 59, in forward
    result = CorrelationFunction(self.pad_size, self.kernel_size, self.max_displacement,self.stride1, self.stride2, self.corr_multiply)(input1, input2)
  File "/content/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 27, in forward
    self.pad_size, self.kernel_size, self.max_displacement,self.stride1, self.stride2, self.corr_multiply)

RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:80)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fe6bc85e193 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: correlation_forward_cuda(at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, int, int, int, int, int, int) + 0x628 (0x7fe6b8f59ad8 in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #2: + 0x1bd3a (0x7fe6b8f69d3a in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #3: + 0x18880 (0x7fe6b8f66880 in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #4: python3() [0x50ac25]

A question about the depth maps

Hello, is the depth map actually useful? In the ablation study, adding the depth map only improved the score by 0.06. Could you provide a score-vs-iteration curve? I want to get into this research direction, and the core problem seems to be motion estimation. Do you have any advice?

Weird artifacts after vimeo90k training

I'm getting weird colored artifacts after running the sample training with the Vimeo90K triplet dataset. All the arguments given to train.py were exactly the same as in the README.md. This does not happen with the downloaded pretrained model checkpoint. Any idea what could be causing this problem?


Out of memory Issues

I noticed that DAIN doesn't handle HD video sizes [1280x720] with 8 GB of GPU memory, running on a GTX 1070. Is there any way to reduce the memory usage on the GPU side?

About the PSNR

I have a question from reading your paper. In your paper the DVF method's PSNR is 34.12, but in the original DVF paper the PSNR is 35.8. What is your evaluation method?

About PWCNet

Can you share the pre-trained PWCNet model? Thanks.

error: FileNotFoundError: [Errno 2] No such file or directory: 'PWCNet/pwc_net.pth.tar'

Possible Enhancements: Super Resolution and Denoising

Can DAIN directly process the raw video?

The demo scripts in the repo require two frames as input; however, can DAIN directly take a video as input and output the corresponding synthesized video?
Or has anyone written a script to do so?
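A common workaround is to split the video into frames with ffmpeg, run the interpolation over consecutive frame pairs, and re-encode; a hypothetical sketch (ffmpeg being installed and the per-pair interpolation step are assumptions, not something the repo provides):

import subprocess

# 1. Decode the source video into numbered frames (ffmpeg assumed installed).
subprocess.run(['ffmpeg', '-i', 'input.mp4', 'frames/%06d.png'], check=True)

# 2. Run DAIN over each consecutive frame pair, writing the in-between
#    frames into out/ (the hypothetical part; the demo scripts show how a
#    single pair is interpolated).

# 3. Re-encode at double the original frame rate (48 fps is illustrative).
subprocess.run(['ffmpeg', '-framerate', '48', '-i', 'out/%06d.png',
                'output.mp4'], check=True)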

"undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKSs

Hi. Thanks for the wonderful work!

I created the virtual environment from the provided environment.yaml
and executed build.sh successfully.

Now I am facing the issue below:
undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKSs

Any suggestion?

Integrating DAIN with SVP

Hi all,

I've seen your videos on YouTube and the results are really amazing. Are you aware of the Smooth Video Project? https://www.svp-team.com/

It would be amazing if we could somehow integrate DAIN with the Smooth Video Project. What do you think?

my_package problem

Hi, I would like to adapt your implementation of separable convolution in other applications. However, it seems that the implementation does not pass PyTorch's gradcheck. Have you ever tried it?

Segmentation fault (core dumped)

UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
Segmentation fault (core dumped)

Why?

question about supplementary materials

In the paper you mention that details of the adaptive warping layers and the configuration of the kernel estimation network are provided in the supplementary materials, but I couldn't find where the supplementary materials are. Would you mind providing them? Thanks.

Problems of padding and cropping

Hi. Thanks for the wonderful work!

I ran the test code on my 1280x720 video, but I noticed that the edges of the image were moving, and it looked like a few pixels had been vertically compressed (720 is not divisible by 128). I see that your code pads the input image and crops the output image, but I don't know what causes this boundary drift.

Can you help me? Thanks!

training strategy

Can you share the train.py code? I don't understand the training strategy. Thanks.

Questions about the input for frame synthesis network

As described in Figure 3 of the CVPR paper, the input of the frame synthesis network consists of five components: raw interpolation kernels, projected flows, warped depth maps, warped frames, and warped context features. However, in lines 177 to 181 of DAIN_slowmotion.py, the input to rectifyNet does not seem to match that description:

rectify_input = torch.cat((cur_output_temp,ref0,ref2, cur_offset_output[0],cur_offset_output[1], cur_filter_output[0],cur_filter_output[1], ctx0,ctx2 ),dim =1)

It seems that the actual input to the frame synthesis network does not include the warped depth maps, and instead uses a blended result from the warped frames.

So which is the correct form of the proposed method? Would you please give a numerical analysis of these different settings?

Environment

Could you provide the environment, e.g. by exporting it with conda env export > environment.yaml? Thank you.

Segmentation fault

When I run

CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py
Dimetrodon
/home/zhenghe/anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.UpsamplingNearest2d is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
/home/zhenghe/anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
/home/zhenghe/anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/nn/functional.py:2423: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
Segmentation fault

RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:80)

I get this error:

RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:80)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fa66bef6193 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: correlation_forward_cuda(at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, int, int, int, int, int, int) + 0x5ea (0x7fa6694c0d4a in /opt/conda/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #2: + 0x1e9b4 (0x7fa6694d19b4 in /opt/conda/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #3: + 0x1b870 (0x7fa6694ce870 in /opt/conda/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)

frame #11: THPFunction_do_forward(THPFunction*, _object*) + 0x4ac (0x7fa6b72d0fec in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)

when running
CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py

Anybody else get this error?
Also, does anybody know how to fix this issue?

Interpolate entire sequence

When I try to interpolate a sequence (1 minute of video), it only interpolates the first frame and nothing else.
For example: if the interpolation is at 0.25, it only interpolates the first 4 frames and then the process stops.

What is the problem?

What is the exact network size for Kernel Estimation Module?

I was curious about the number of parameters of the 1D kernel estimation network. The paper states that this submodule has 5.51M parameters. However, when I look at the kernel estimation network structure in the supplementary material, I believe it is much larger than 5.51M: in the decoder, one layer with 4x4x512x512 already has around 4.19M parameters.

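The arithmetic behind that estimate (weights only, ignoring biases):

# Parameters of one 4x4 convolution with 512 input and 512 output channels.
k, c_in, c_out = 4, 512, 512
print(k * k * c_in * c_out)   # 4194304, i.e. about 4.19M, matching the estimate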

Fix BibTeX citation on website

The paper citation on the Google Sites page reads (link):

booktitle = {IEEE Conferene on Computer Vision and Pattern Recognition},

"Conferene" should be "Conference". The same typo was in README.md but is fixed in #32.

Kernel Panic using Cuda 10.2

Hello - firstly, thanks for this and your great documentation. Much appreciated.

I'm using Ubuntu 18.04 LTS, CUDA 10.2, NVIDIA 440 drivers, and a single Titan X.

I've followed the readme, installed the dependencies in a virtual env, compiled the extensions, and am able to run the demo. However, after a few seconds the demo crashes and kernel-panics the entire system.

I've attempted to edit both extensions' NVCC flags, as per the helpful note in the documentation, but to no avail:

    '-gencode', 'arch=compute_52,code=sm_52',
    '-gencode', 'arch=compute_60,code=sm_60',
    '-gencode', 'arch=compute_61,code=sm_61',
    '-gencode', 'arch=compute_70,code=sm_70',
    '-gencode', 'arch=compute_75,code=sm_75',
    '-gencode', 'arch=compute_75,code=compute_75',

However, that also kernel-panics the machine.

I can monitor GPU memory usage right before the crash and can see PyTorch allocating GPU memory; it appears to reach the maximum, and then the system dies.

Are there other specific hardware requirements for this code base?

Import Error when importing correlation_cuda

I get the following error when importing correlation_cuda in Python 3.6 and Python 2.7, with torch 1.4.0 and 1.1.0:

/usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

The other modules from the my_package directories work fine.

How to train on a self-constructed dataset

I want to train DAIN on a self-constructed HD video dataset; can you give me some suggestions on the details? Should I resize or crop the HD videos to match the Vimeo90K dataset?

Thank you in advance for your reply.

Color inconsistency of the HD dataset

Hi, I was trying to compare the results on the provided HD dataset. However, I found the color is inconsistent with the ground truth from MEMC. For example:

[Images: DAIN_resized vs. orig]

Can you double-check the issue? Also, are the PSNR & SSIM reported in the paper calculated on grayscale images for HD?

Possible Enhancements: Usage of quad-frames instead of bi-frames

Here is a possible way of improving DAIN:
Instead of using frames X-1 and X+1 to generate frame X, what about using frames X-3, X-1, X+1, and X+3?
This could improve accuracy by acquiring more context, but it leads to some problems:
What if some of the frames are from different scenes? What should it do?
Is there already scene identification for bi-frame interpolation within DAIN?
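For illustration, one naive form of scene identification is to threshold the mean absolute difference between consecutive frames; a toy sketch (the threshold is an arbitrary illustration value, not something DAIN ships with):

import numpy as np

def is_scene_cut(frame_a, frame_b, threshold=30.0):
    # Flag a cut when consecutive frames differ strongly on average
    # (frames as uint8 arrays of identical shape).
    diff = np.abs(frame_a.astype(np.float32) - frame_b.astype(np.float32))
    return diff.mean() > threshold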

Is this tool suitable for real-time video interpolation?

Sorry to bother you, but I really need to know whether this tool is suitable for real-time video interpolation. I'm working on my graduation project, which needs a real-time video interpolation tool with low resource usage.
I found that you published a paper named "High-quality and real-time frame interpolation on heterogeneous computing system" at CVPR, which seems to describe a suitable tool, but I didn't find its open-source code. Do you know of any tools suitable for real-time video interpolation if this project is not what I'm trying to find? 😅
I previously sent a question to your Gmail address from [email protected]; did you receive it? If you don't mind, could you share your QQ or WeChat ID, or simply reply by email? Please forgive the intrusion.

((invalid argument at /pytorch/aten/src/THC/THCGeneral.cpp)) when running "python demo_MiddleBury.py"

when running CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py
I got the following error
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=405 error=11 : invalid argument
Traceback (most recent call last):
  File "demo_MiddleBury.py", line 131, in <module>
    y_s,offset,filter = model(torch.stack((X0, X1),dim = 0))
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/DAIN/networks/DAIN.py", line 130, in forward
    cur_filter_input[:, 3:, ...]),dim=0))
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuda runtime error (11) : invalid argument at /pytorch/aten/src/THC/THCGeneral.cpp:405

PS: my environment is Python 3.7.3 + CUDA 9.0 + cuDNN 7.1.3 + PyTorch 1.0.0

Issues about the test results in DAIN_HD_videos

Hi,
@baowenbo, thank you for your great work and for sharing the code. The test results are really impressive, but I found some blurry or unexpected results in DAIN_HD_videos, as shown below.
Can you tell why these happen, or share some ideas for improvement?

  • some abnormal blocks in interpolated frames

  • blurry results when large displacements appear

Some questions about running on Windows

When I run build.sh for DAIN on Windows, I get:
D:\python3.7\lib\site-packages\torch\lib\include\torch\csrc\api\include\torch/torch.h(7): fatal error C1021: invalid preprocessor command 'warning'
D:\python3.7\lib\site-packages\torch\utils\cpp_extension.py:184: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
error: command 'D:\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.21.27702\bin\HostX86\x64\cl.exe' failed with exit status 2
But I have configured the path for cl.exe. Can someone guide me? Thanks!

PyTorch extension generation step fails with nvcc9.1, gcc-4.9.3 to 7.4.0, pytorch-1.0.0 to 1.12

When trying to install this project, it fails on the PyTorch extension compilation step.
Specifically, /usr/bin/nvcc failed with exit status 1, due to errors in /usr/include/c++/6/tuple and /usr/include/c++/6/type_traits (actual printout below).

Our system is running Ubuntu 18.04, with CUDA 9.1 (nvcc V9.1.85) and NVIDIA driver 390.116. (Due to other projects on this workstation, reinstalling graphics drivers or CUDA is not really viable...)
Python 3.6.8 is in a local conda environment, with specified conda install cudatoolkit=9.0 cudnn=7.1.2.
For PyTorch, I've tried pytorch=1.0.0 and pytorch (no specific version) via conda install; neither resolves the compile error.
For gcc/g++, I've tried 5.5.0, 4.9.3, 6.5.0, and 7.4.0 (4.9 from the xenial repositories, since it's no longer available in bionic, and the others from ppa:ubuntu-toolchain-r/test), switched via update-alternatives. No luck with any.

I've also tried to explicitly link nvcc to a specific gcc in the various setup.py scripts by adding '-DCUDA_HOST_COMPILER=/usr/bin/gcc-5' to the nvcc-args list, but even that did not work.

Googling for similar issues suggests that PyTorch 1.0 extensions don't really work with nvcc/CUDA < 9.2; however, you suggest version 9.0 in the instructions...

Any thoughts on how to best resolve this, so that pytorch extensions can compile?

Output from running ./build.sh in DAIN/my_package (from one of the setup.py files, to avoid replicating the same thing 8 times):

running install
running bdist_egg
running egg_info
creating filterinterpolation_cuda.egg-info
writing filterinterpolation_cuda.egg-info/PKG-INFO
writing dependency_links to filterinterpolation_cuda.egg-info/dependency_links.txt
writing top-level names to filterinterpolation_cuda.egg-info/top_level.txt
writing manifest file 'filterinterpolation_cuda.egg-info/SOURCES.txt'
reading manifest file 'filterinterpolation_cuda.egg-info/SOURCES.txt'
writing manifest file 'filterinterpolation_cuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'filterinterpolation_cuda' extension
creating build
creating build/temp.linux-x86_64-3.6
gcc -pthread -B /mnt/Partition2/deeplearning/DAIN/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/TH -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/include -I/mnt/Partition2/deeplearning/DAIN/env/include/python3.6m -c filterinterpolation_cuda.cc -o build/temp.linux-x86_64-3.6/filterinterpolation_cuda.o -std=c++11 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=filterinterpolation_cuda -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from filterinterpolation_cuda.cc:1:0:
/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/torch.h:7:2: warning: #warning "Including torch/torch.h for C++ extensions is deprecated. Please include torch/extension.h" [-Wcpp]
 #warning \
  ^
/usr/bin/nvcc -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/TH -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/include -I/mnt/Partition2/deeplearning/DAIN/env/include/python3.6m -c filterinterpolation_cuda_kernel.cu -o build/temp.linux-x86_64-3.6/filterinterpolation_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=filterinterpolation_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, …> constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; …]’
/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;

[The log repeats these same errors — “wrong number of template arguments” in _NonNestedTuple(), and “mismatched argument pack lengths” in _MoveConstructibleTuple() and _ImplicitlyMoveConvertibleTuple() — for every std::tuple-returning ATen function it instantiates (tuples of three, four, and five at::Tensor elements, plus one containing a std::vector<at::Tensor>), triggered from ATen/core/TensorMethods.h:1117 and ATen/Functions.h:2558, 3623, 3626, and 4119, before the build finally aborts:]
error: command '/usr/bin/nvcc' failed with exit status 1
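
This failure pattern appears to be the known incompatibility between CUDA 9's nvcc and the <tuple> header shipped with GCC 6's libstdc++: nvcc passes the ATen headers through the host g++, and a host compiler at GCC 6 or newer trips exactly these constexpr tuple traits. Using an older host compiler (selectable via nvcc's -ccbin flag), matching the versions listed under Requirements and Dependencies, avoids it. Below is a minimal pre-build sanity check — a sketch, not part of the repository; the "GCC >= 6" gate is an assumption inferred from the log above:

# Sketch: warn before build.sh runs if the host gcc is likely to
# reproduce the tuple instantiation errors shown above.
import re
import subprocess

import torch

print("PyTorch:", torch.__version__)  # the extensions target 1.0.0

gcc_out = subprocess.check_output(["gcc", "--version"]).decode()
gcc_version = re.search(r"(\d+)\.(\d+)\.(\d+)", gcc_out).group(0)
print("host gcc:", gcc_version)

if int(gcc_version.split(".")[0]) >= 6:
    print("warning: gcc >= 6 as nvcc's host compiler is known to break "
          "PyTorch 1.0 CUDA extension builds; use an older gcc (see the "
          "Requirements section) or pass -ccbin to nvcc")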

Undefined names in my_package

Undefined names are usually a sign of a typo, a missing import, or code that has not been ported to Python 3. These would be compile-time errors in a compiled language, but in Python a NameError is raised at runtime, halting or crashing the script for the user.
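
In other words, a module with an undefined name can import cleanly and still crash later, because Python only looks the name up when the offending line executes. A minimal illustration, using one of the names flagged below:

# Defining the function is fine; the NameError only surfaces when the
# body runs and the undefined name is resolved.
def interpolate():
    return FilterInterpolationModule()  # F821: undefined name

interpolate()  # NameError: name 'FilterInterpolationModule' is not defined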

flake8 testing of https://github.com/baowenbo/DAIN on Python 3.8.0

$ flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics

./my_package/test_module.py:21:25: F821 undefined name 'SeparableConvFlowModule'
    FilterInterpolate = SeparableConvFlowModule(filtersize)
                        ^
./my_package/test_module.py:125:25: F821 undefined name 'SeparableConvModule'
    FilterInterpolate = SeparableConvModule(filtersize)
                        ^
./my_package/test_module.py:219:25: F821 undefined name 'FilterInterpolationModule'
    FilterInterpolate = FilterInterpolationModule()
                        ^
./my_package/test_module.py:324:19: F821 undefined name 'InterpolationModule'
    Interpolate = InterpolationModule()
                  ^
./my_package/test_module.py:408:19: F821 undefined name 'InterpolationChModule'
    Interpolate = InterpolationChModule(input1.size(1))
                  ^
./my_package/test_module.py:492:15: F821 undefined name 'FlowProjectionModule'
    Project = FlowProjectionModule()
              ^
./my_package/test_module.py:518:15: F821 undefined name 'FlowProjectionModule'
    Project = FlowProjectionModule() # regnenerate
              ^
./my_package/test_module.py:632:23: F821 undefined name 'output'
    x = output_cuda - output.cuda()
                      ^
./my_package/test_module.py:683:15: F821 undefined name 'WeightedFlowProjectionModule'
    Project = WeightedFlowProjectionModule(threshold=20.0/255.0,requires_grad=True)
              ^
./my_package/test_module.py:710:15: F821 undefined name 'WeightedFlowProjectionModule'
    Project = WeightedFlowProjectionModule(threshold=20.0/255.0, requires_grad=True) # regnenerate
              ^
./my_package/test_module.py:770:19: F821 undefined name 'AdaptiveWeightInterpolationModule'
    Interpolate = AdaptiveWeightInterpolationModule(training=training)
                  ^
./MegaDepth/data/image_folder.py:42:78: F821 undefined name 'IMG_EXTENSIONS'
                               "Supported image extensions are: " + ",".join(IMG_EXTENSIONS)))
                                                                             ^
./MegaDepth/data/image_folder.py:131:78: F821 undefined name 'IMG_EXTENSIONS'
                               "Supported image extensions are: " + ",".join(IMG_EXTENSIONS)))
                                                                             ^
13    F821 undefined name 'IMG_EXTENSIONS'
13

https://flake8.pycqa.org/en/latest/user/error-codes.html

On the flake8 test selection: this PR does not focus on "style violations" (the majority of flake8 error codes, which psf/black can autocorrect). Instead, these tests focus on runtime safety and correctness:

  • E9 tests are about Python syntax errors, usually raised because flake8 cannot build an Abstract Syntax Tree (AST). Often these issues are a sign of unused code or code that has not been ported to Python 3. These would be compile-time errors in a compiled language, but in a dynamic language like Python they result in the script halting or crashing for the user.
  • F63 tests are usually about confusion between identity and equality in Python. Using ==/!= to compare str, bytes, and int literals is the classic case. These are areas where a == b is True but a is b is False (or vice versa). Python >= 3.8 raises a SyntaxWarning on these instances.
  • F7 tests catch logic errors and syntax errors in type hints.
  • F82 tests are almost always undefined names, which are usually a sign of a typo, a missing import, or code that has not been ported to Python 3. These too would be compile-time errors in a compiled language, but in Python a NameError is raised, halting or crashing the script for the user. A sketch of the missing imports for test_module.py follows this list.
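
Concretely, most of the F821 hits in my_package/test_module.py point at layer classes that are never imported; the stray ones — 'output' at test_module.py:632 and 'IMG_EXTENSIONS' in MegaDepth — look like a use-before-assignment and a missing constant, which imports alone will not fix. The sketch below shows the shape of the import fix; the module paths are hypothetical, inferred from the my_package directory layout rather than verified against the repository:

# Hypothetical sketch only: the import paths below are assumptions about
# where each Module class lives under my_package/, not verified code.
from SeparableConvFlow import SeparableConvFlowModule
from SeparableConv import SeparableConvModule
from FilterInterpolation import FilterInterpolationModule
from Interpolation import InterpolationModule
from InterpolationCh import InterpolationChModule
from FlowProjection import FlowProjectionModule
from WeightedFlowProjection import WeightedFlowProjectionModule
from AdaptiveWeightInterpolation import AdaptiveWeightInterpolationModule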

Implementation of functions in "my_package" on CPU

Hi~, your code allows the user to run the modules in the my_package dir, such as DepthFlowProjection, on the CPU. But I didn't find the corresponding implementation (e.g. DepthFlowProjectionLayer_cpu_forward()). Did I miss it?
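
If the report is right and only the CUDA kernels exist, then passing CPU tensors to these layers can only fail inside the extension. A guard along these lines — a hypothetical sketch, not the repository's code — would at least make the limitation explicit:

import torch

def depth_flow_projection_forward(flow, depth):
    # Hypothetical wrapper, not the repository's code: reject CPU tensors
    # up front instead of failing inside the compiled extension.
    if not (flow.is_cuda and depth.is_cuda):
        raise NotImplementedError(
            "DepthFlowProjectionLayer_cpu_forward() is not implemented; "
            "move inputs to the GPU with .cuda() first.")
    # ... dispatch to the compiled CUDA extension here ...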
