
zhangmozhe / deep-exemplar-based-video-colorization

Stars: 331 · Forks: 73 · Size: 470.01 MB

The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".

Home Page: https://arxiv.org/abs/1906.09909

License: MIT License

Python 100.00%
colorization computer-vision deep-learning gan generative-adversarial-network image-generation image-processing old-photo pytorch

deep-exemplar-based-video-colorization's People

Contributors

zhangmozhe


deep-exemplar-based-video-colorization's Issues

illustrative training data

Could you please release a tiny illustrative training dataset, so that the preparation of custom training data can be easily followed? Currently it is not easy to prepare custom training data by reading train.py alone.
Alternatively, could you please give a further explanation of the following fields?
(
image1_name,
image2_name,
reference_video_name,
reference_video_name1,
reference_name1,
reference_name2,
reference_name3,
reference_name4,
reference_name5,
reference_gt1,
reference_gt2,
reference_gt3,
)
Thank you very much.

Error on tensor sizes when inputting the actual image size

Hello,

First of all, congrats for this amazing work, and thank you for sharing it.

When using the --image_size argument and specifying the actual size of my input frames (720x964) so that they are not cropped, I get the following error:
Sizes of tensors must match except in dimension 3. Got 120 and 121 (The offending index is 0)
I tried numerous values and got this kind of error almost systematically. After a lot of trial and error, it seems that sizes matching the pattern 16*p x 32*q work (p and q integers).
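If the 16*p x 32*q observation holds, a small helper can round an arbitrary frame size to the nearest accepted one before passing it to --image_size. This is a sketch based only on the trial-and-error pattern above, not on any documented constraint of the repository:

```python
def nearest_valid_size(height, width, h_mult=16, w_mult=32):
    """Round a frame size to the nearest multiples the network seems to accept.

    Based on the empirical observation that heights of 16*p and widths of
    32*q avoid the tensor-size mismatch; the multiples are assumptions.
    """
    valid_h = max(h_mult, round(height / h_mult) * h_mult)
    valid_w = max(w_mult, round(width / w_mult) * w_mult)
    return valid_h, valid_w

# The 720x964 frames from this issue would be rounded to 720x960.
print(nearest_valid_size(720, 964))  # (720, 960)
```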

Do you have any idea of what could be the cause of this error and if I am doing something wrong?

Thanks a lot.

NB : there is a small typo in README.md in the Test section : image-size instead of image_size.

Wrong output aspect ratio

Regardless of the input aspect ratio, the output is always 16:9.
I tried to process a 4:3 sample video, but the output video is 16:9, cropped at the top and bottom.

Training codes

Thanks for sharing the work!
Will the training pipeline be released?

Dataset for Training

How do I train the model? Could you describe what each file should contain and its format, so that training can be run successfully?

CUDA device error "module 'torch._C' has no attribute '_cuda_setDevice'" when running test.py

Hi !

Trying out test.py results in the following error:

Traceback (most recent call last):
  File "test.py", line 26, in <module>
    torch.cuda.set_device(0)
  File "C:\Users\natha\anaconda3\envs\ColorVid\lib\site-packages\torch\cuda\__init__.py", line 311, in set_device
    torch._C._cuda_setDevice(device)
AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'

I tried installing pytorch manually using their tool https://pytorch.org/get-started/locally/ (with CUDA 11.6) but that doesn't resolve the issue.

Can someone help me understand what is going on? Thanks!
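This AttributeError typically appears when a CPU-only PyTorch build is installed, so torch._C simply has no CUDA bindings. As a quick diagnostic (a sketch, not part of the repository's code), you can guard the device selection instead of calling torch.cuda.set_device(0) unconditionally:

```python
import torch

# On CPU-only builds torch.cuda.is_available() is False, and calling
# torch.cuda.set_device() raises the "_cuda_setDevice" AttributeError.
print(torch.__version__)  # a "+cpu" suffix indicates a CPU-only build

if torch.cuda.is_available():
    torch.cuda.set_device(0)
    device = torch.device("cuda:0")
else:
    device = torch.device("cpu")

print(device)
```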

training command is wrong.

The original training command is

python --data_root [root of video samples] \
       --data_root_imagenet [root of image samples] \
       --gpu_ids [gpu ids]

Maybe it should be

python train.py --data_root [root of video samples] \
                --data_root_imagenet [root of image samples] \
                --gpu_ids [gpu ids]

?

Error 404 - Important files missing

I was working with the Colab program and there appear to be important models/files missing.
As a result the program has ceased to function. I've brought it to the designer's attention, so hopefully it will be resolved.

Runtime error

I get a runtime error when running the test cells at "first we visualize the input video". I'm not good with code, but this is the first time I've experienced this issue with this wonderful program. No cells execute after this error. I've attached a screenshot.

Documentation for starting the library

Hello everyone! I ran into a problem launching the library from your repository, but after some experimentation I managed to get it up and running. I want to share my experience so that other users do not waste their time looking for a solution.

conda create -n ColorVid python=3.6
conda activate ColorVid

pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html 

In «requirements.txt», replace opencv_contrib_python>=4.1.0.25
with opencv_contrib_python==4.1.0.25

pip install -r requirements.txt

python test.py --clip_path ./sample_videos/clips/v32 \
               --ref_path ./sample_videos/ref/v32 \
               --output_path ./sample_videos/output

conda deactivate

I'm sure this will improve the user experience and help new users get started with your library faster.

Thank you!

Update code

The older PyTorch version no longer exists or is not available for install. Is it possible to update the script so that it works?

the test code

Thanks for your great work! I have a question about running test.py: why don't you extract the features of the reference image outside the for loop? I haven't found any difference.

Training has little effect

Hello, I read in the paper that "we train the network for 10 epochs with a batch size of 40 pairs of video frames."
Is it effective after only 10 epochs? Is your data 768 videos with 25 frames per video? At present I train on only one video with epoch=40, but I find that it has little effect. What might be the reason?

Training data problem

Hello, I have successfully run the colorization test, but because my dataset is an infrared dataset, the results are not good, so I want to retrain the model. However, when preparing my own data I found that I still need some txt files, such as pairs_output_new.txt. Since I don't know the txt file format, I cannot start training. Can you provide the files and sample formats required for training?

I would appreciate your reply!!!

CUDA OOM

Hello, I am running a 4 GB NVIDIA GPU. Is that enough for inference? I have tried running on Ubuntu 18.04 as well as Windows, but I always get an out-of-memory error eventually, sometimes after the 2nd image and sometimes after the 5th. This is 1080p video.

Wrong output resolution

Processing a 4:3 video (912x720) outputs a cropped, downscaled 16:9 video (768x432).
Playing around with "python test.py --image-size [image-size]" doesn't help.
Maybe I am not specifying the argument properly?
What is the proper use of --image-size [image-size] in order to get 912x720 output?
Any suggestions would be greatly appreciated.

the pretrained models

Can you help me with downloading the pretrained models? They no longer exist at the link you uploaded.

video size very low

The video colorization is very good and very impressive, but the rendered images are a low 768x432, and the video is the same size. How can I increase the image and video size? Thank you.

Train the model

I have no idea how to train a new model on my dataset; it would help if the author could provide a tiny dataset sample.

Test result problem

At present, after training, I find that the generated test images are plausible, but the color saturation is very low. Is it because of the colorization model or some other reason?
I'm looking forward to your reply!!!

Colorization result differs when changing scale_factor from 0.5 to 1.0 in test.py. Not sure why?

This post is actually a question about the scale_factor.

This is a great project! The colorization produced a very stable output from a sequence of frames!

Experimenting with the scale_factor on a set of low-resolution frames (480 x 640) gave me a surprising result!

The code worked fine when changing the scale_factor from 0.5 to 1.0 in test.py before colorization, and also changing the scale_factor from 2.0 to 1.0 after colorization.

The video from scale_factor 1.0 is less stable, especially having more yellow regions than with the default scale_factor of 0.5.

I understand that scaling down/up should not affect the viewing quality of the colored image.

The first question is: why does the colorization become less stable when changing the scale_factor in test.py?

The scale_factor variable also appears in models/ColorVidNet.py and models/NonlocalNet.py.
The value in these files is not passed from test.py, but might it be related to the value in test.py?
Does the scale_factor need to be changed in these files too? If so, how?

Thank you very much!
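For context, the pre/post scaling described above can be sketched with torch.nn.functional.interpolate; the tensor names here are illustrative, not the repository's actual variables:

```python
import torch
import torch.nn.functional as F

# A 480x640 frame, as in the experiment above (batch, channels, H, W).
frame = torch.rand(1, 3, 480, 640)

# scale_factor=0.5 before colorization: the network sees a 240x320 frame.
small = F.interpolate(frame, scale_factor=0.5,
                      mode="bilinear", align_corners=False)

# scale_factor=2.0 after colorization: the output is resized back to 480x640.
restored = F.interpolate(small, scale_factor=2.0,
                         mode="bilinear", align_corners=False)

print(small.shape, restored.shape)
# torch.Size([1, 3, 240, 320]) torch.Size([1, 3, 480, 640])
```

Setting both factors to 1.0 means the network operates at full resolution, so the non-local matching sees different spatial statistics, which may explain the stability difference.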

Questions about the test phase

Thanks for your outstanding work!
I have some questions when I read it.

  1. What are the settings when you test this video model on image colorization, as used for comparison with other image colorization methods?
  2. Could you please give me a URL for your video test set (the 116 video clips collected from Videvo)?

Thanks again for your attention.
