(Accepted by ICCV 2021)
This version of the code is for training on real low-FPS DVS data collected by a DAVIS240C camera. It trains on the visible low-FPS frames (12 fps) together with the corresponding events stored in aedat4 files, and can interpolate in-between frames at arbitrary times. An aedat4 file is provided in dataset/aedat4 as a demo for running the whole pipeline.
Sorry for breaking the promise: because most of the proposed slomoDVS dataset contains sensitive information (faces, human bodies, license plates, and palm prints), the dataset and the weights pretrained on it did not pass the compliance review policy recently launched by the company.
- cuda 9.0
- python 3.7
- pytorch 1.1
- numpy 1.17.2
- tqdm
- gcc 5.2.0
- cmake 3.16.0
- opencv_contrib_python
- Compile the correlation module (the PWCNet and the correlation module are modified from DAIN):
  a) cd stage1/lib/pwcNet/correlation_pytorch1_1
  b) python setup.py install
- Install apex: https://github.com/NVIDIA/apex
- For processing DVS files:
  a) More detailed information about the aedat4 format and the DAVIS240C camera can be found here
  b) Tool for processing aedat4 files: dv-python
- For distributed training with multiple GPUs on a cluster: slurm 15.08.11
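As a starting point for inspecting the demo aedat4 file, the sketch below reads frames and events with dv-python's `AedatFile` API. The helper `events_between` is hypothetical (not part of this repo) and only illustrates slicing an event stream by timestamp:

```python
import numpy as np

def load_aedat4(path):
    """Read frames and events from an aedat4 file via dv-python (pip install dv)."""
    from dv import AedatFile  # imported lazily so the rest of the module works without dv
    with AedatFile(path) as f:
        # Each frame carries a timestamp and a grayscale image array.
        frames = [(frame.timestamp, frame.image) for frame in f['frames']]
        # Event packets are structured numpy arrays with fields
        # 'timestamp', 'x', 'y', 'polarity'; stack them into one array.
        events = np.hstack([packet for packet in f['events'].numpy()])
    return frames, events

def events_between(timestamps, t0, t1):
    """Indices of events whose timestamps fall in the half-open window [t0, t1)."""
    ts = np.asarray(timestamps)
    return np.nonzero((ts >= t0) & (ts < t1))[0]

# Synthetic demo (no aedat4 file needed):
ts = np.array([10, 20, 30, 40, 50])
idx = events_between(ts, 15, 45)  # selects the events at t = 20, 30, 40
```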
You can prepare your own event data following the demo in DVSTool:
- Place the aedat4 file in ./dataset/aedat4
- cd DVSTool
- python mainDVSProcess_01.py

  It extracts the events and frames stored in the .aedat4 file into pkl files saved in dataset/fastDVS_process.
- python mainGetDVSTrain_02.py

  It gathers the train samples and saves them in dataset/fastDVS_dataset/train. (A train sample includes I0, I1, I2, I01, I21 and E1.)
- python mainGetDVSTest_03.py

  It gathers the test samples and saves them in dataset/fastDVS_dataset/test. (A test sample includes I_-1, I0, I1, I2, E1/3, E2/3.)
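The exact layout of the per-sample event tensors is defined by the scripts above, but the E1/3 and E2/3 naming suggests events split at the target interpolation time. A minimal sketch, assuming events between boundary frames are accumulated into per-polarity count images on either side of the target time (the function name and layout are illustrative, not the repo's implementation):

```python
import numpy as np

def event_stacks(ev_t, ev_x, ev_y, ev_p, t0, t1, frac, h, w):
    """Split events of the interval [t0, t1) at t_mid = t0 + frac * (t1 - t0)
    and accumulate each side into a 2-channel (off/on polarity) count image."""
    t_mid = t0 + frac * (t1 - t0)
    before = (ev_t >= t0) & (ev_t < t_mid)
    after = (ev_t >= t_mid) & (ev_t < t1)
    stacks = []
    for mask in (before, after):
        img = np.zeros((2, h, w), dtype=np.float32)
        # Scatter-add one count per event at (polarity, y, x).
        np.add.at(img, (ev_p[mask], ev_y[mask], ev_x[mask]), 1.0)
        stacks.append(img)
    return stacks

# Toy example: 3 events, target time at 1/3 of the interval [0, 10).
ev_t = np.array([0.0, 5.0, 9.0])
ev_x = np.array([0, 1, 2])
ev_y = np.array([0, 0, 0])
ev_p = np.array([0, 1, 1])
pre, post = event_stacks(ev_t, ev_x, ev_y, ev_p, 0.0, 10.0, 1 / 3, 4, 4)
```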
cd stage1

For single-machine training:
a) Modify the config in configs/configEVI.py accordingly
b) python train.py

For distributed training on a slurm cluster:
a) Modify the config in configs/configEVI.py accordingly
b) Modify runEvi.py in runBash accordingly
c) python runBash/runEvi.py
cd stage2

Place the experiment dir trained by stage1 in ./output

For single-machine training:
a) Modify the config in configs/configEVI.py accordingly, especially the paths in lines 28 and 29
b) python train.py

For distributed training on a slurm cluster:
a) Modify the config in configs/configEVI.py accordingly, especially the paths in lines 28 and 29
b) Modify runEvi.py in runBash accordingly
c) python runBash/runEvi.py
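The runBash/runEvi.py launcher presumably hands the job to slurm; in such setups each process derives its distributed rank from slurm's environment variables. A minimal sketch using the standard variable names (the actual script in this repo may differ):

```python
import os

def slurm_rank_info():
    """Derive (rank, world_size, local_rank) from slurm environment variables.

    SLURM_PROCID  - global task id across all nodes
    SLURM_NTASKS  - total number of launched tasks
    SLURM_LOCALID - task id within the local node (maps to a GPU index)
    """
    rank = int(os.environ['SLURM_PROCID'])
    world_size = int(os.environ['SLURM_NTASKS'])
    local_rank = int(os.environ['SLURM_LOCALID'])
    return rank, world_size, local_rank

# These values would typically be passed on to
# torch.distributed.init_process_group(backend='nccl',
#                                      rank=rank, world_size=world_size)
# with torch.cuda.set_device(local_rank).
```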
@InProceedings{Yu_2021_ICCV,
author = {Yu, Zhiyang and Zhang, Yu and Liu, Deyuan and Zou, Dongqing and Chen, Xijun and Liu, Yebin and Ren, Jimmy S.},
title = {Training Weakly Supervised Video Frame Interpolation With Events},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {14589-14598}
}
[styleGAN], [TTSR], [DAIN], [superSlomo], [QVI], [FaceShifter]