
rajat95 / deep-deghosting-hdr

Implementation of ICCP 2019 paper 'A Fast, Scalable, and Reliable Deghosting Method for Extreme Exposure Fusion'

License: MIT License

Python 100.00%

deep-deghosting-hdr's Introduction

Deep Deghosting HDR:

This repository contains code and pretrained models for the HDR version of our paper 'A Fast, Scalable, and Reliable Deghosting Method for Extreme Exposure Fusion', accepted at ICCP 2019.
It has been tested on GTX 1080 Ti and RTX 2070 GPUs with TensorFlow 1.13, and contains scripts for both inference and training.

The project was built on Python 3.6.7 and requires the following packages:

  • affine==2.2.2
  • matplotlib==3.0.2
  • numpy==1.16.2
  • opencv-python==4.0.0.21
  • Pillow==5.4.1
  • scikit-image==0.14.2
  • scikit-learn==0.20.2
  • scipy==1.2.1
  • tensorboard==1.13.1
  • tensorflow-gpu==1.13.1
  • termcolor==1.1.0
  • tqdm==4.31.1

Inference Instructions:

Use the script infer.py to perform inference. The script expects:

  1. A directory containing a set of multi-exposure shots, labeled 1.tif, 2.tif, 3.tif, and a file exposure.txt listing the EV gaps between the images.
  2. Pretrained flow, refinement, and fusion models.
  3. The choice of fusion model: tied (works for any number of images) or untied (fixed number of images).
  4. The image to use as reference (1st or 2nd).
  5. The GPU id of the device to run the script on.

Note:

  1. To fit everything into a single script, the unofficial PWC-Net implementation included in this repository is used, but you can also precompute flows with any official implementation.
  2. The script is written for 3 multi-exposure shots but can easily be extended to an arbitrary number of inputs along similar lines.

Sample Command:

python infer.py --source_dir ./data_samples/test_set --fusion_model tied --ref_label 2 --gpu 1

Training Instructions:

The script train_refine.py trains the refinement model.

Description of inputs to the script:

  1. train_patch_list : list of training images. Download them from (Link to be updated soon). Use a pretrained flow algorithm to precompute flows as numpy files and save them as flow_21.npy and flow_23.npy. Refer to the file data_samples/refine_train.txt and the directory data_samples/refine_data for samples.
  2. val_patch_list : list of test images, organized similarly.
  3. logdir : checkpoints and tensorboard visualizations are logged here.
  4. iters : number of iterations to train the model for.
  5. image_dim : dimensions of the input patch during training.
  6. batch_size : batch size during training.
  7. restore : 0 to start afresh, 1 to load a checkpoint.
  8. restore_ckpt : if restore is 1, path to the checkpoint to load.
  9. gpu : GPU id of the device to use for training.
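The precomputed flow_21.npy / flow_23.npy files are dense flow fields that the refinement stage uses to warp the non-reference exposures toward the reference. As a minimal sketch of how such a (H, W, 2) flow field is consumed, here is a numpy-only backward warp; the actual pipeline presumably uses bilinear sampling, while this uses nearest-neighbour to stay short:

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp `image` using a dense flow field of shape (H, W, 2),
    such as the precomputed flow_21.npy / flow_23.npy files.
    Nearest-neighbour sampling keeps the example short."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # for each target pixel, sample the source location displaced by the
    # flow vector, clamped to the image bounds
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]
```

With a zero flow field this returns the image unchanged; a constant flow of +1 in x shifts content by one pixel.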

The script train_static_fusion.py trains the fusion model.

Description of inputs to the script:

Note: Use the pretrained refinement model to generate static versions of the training images.

  1. train_patch_idx : list of training images. Download them from here. Refer to the file data_samples/fusion_train.txt and the directory data_samples/fusion_data for samples.
  2. test_patch_idx : list of test images.
  3. fusion_model : choose between the untied and tied fusion models.
  4. logdir : checkpoints and tensorboard visualizations are logged here.
  5. iters : number of iterations to train the model for.
  6. lr : initial learning rate.
  7. image_dim : dimensions of the input patch during training.
  8. batch_size : batch size during training.
  9. restore : 0 to start afresh, 1 to load a checkpoint.
  10. restore_ckpt : if restore is 1, path to the checkpoint to load.
  11. gpu : GPU id of the device to use for training.
  12. hdr : set to 1 to concatenate the corresponding HDR images with the input LDRs.
  13. hdr_weight : weight for the MSE loss between tonemapped HDR outputs.
  14. ssim_weight : weight for the MS-SSIM loss.
  15. perceptual_weight : weight for the perceptual loss.
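As a rough sketch of how the three weight flags above might combine the loss terms: tonemapped-HDR MSE losses in this line of work commonly use μ-law compression, and SSIM-style terms are typically turned into losses as 1 − SSIM. Both of those specifics are assumptions about this repo, not confirmed from its code:

```python
import numpy as np

MU = 5000.0  # mu-law compression constant; a common choice, assumed here

def tonemap(hdr):
    """Mu-law tonemapping (assumed), mapping linear HDR values in [0, 1]
    to a compressed range before computing the MSE loss."""
    return np.log1p(MU * hdr) / np.log1p(MU)

def total_loss(mse, msssim, perceptual,
               hdr_weight, ssim_weight, perceptual_weight):
    """Sketch: weighted sum of the three terms controlled by the flags.
    MS-SSIM is a similarity in [0, 1], so (1 - msssim) is used as a loss."""
    return (hdr_weight * mse
            + ssim_weight * (1.0 - msssim)
            + perceptual_weight * perceptual)
```

The tonemap is monotonic and maps 0 → 0 and 1 → 1, so the MSE is computed on a perceptually compressed but order-preserving scale.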

deep-deghosting-hdr's People

Contributors

kramprabhakar, rajat95


deep-deghosting-hdr's Issues

Unable to download the model

Thank you for sharing your work. The model download link in Checkpoint cannot be opened. Could you please provide a new download link? Thank you!

Issue about arbitrary input exposure

Hi @rajat95 @KRamPrabhakar

I used the tied network with the corresponding parameters to test a five-exposure sequence (kindly refer to the BabyOnGrass scene in Sen's paper: https://web.ece.ucsb.edu/~psen/hdr_stuff/Scenes.zip). However, the result is prone to ghosting artifacts, as shown below.

[image: BabyOnGrass output, detailed]

Thus, I am wondering whether our own reproduction caused this problem, since the code for an arbitrary number of inputs is not available. If there is anything wrong with the result, please let me know.

If possible, could you provide the testing code that accepts arbitrary exposures, or the results (.hdr) on Sen's dataset (https://web.ece.ucsb.edu/~psen/hdr_stuff/Scenes.zip)?

Any help you can provide is really appreciated.

Best,
Hanwei

When inputting a single image

I appreciate your work very much. I would like to ask about reconstruction from a single image: since the model is suitable for any number of inputs, can the network be extended to reconstruct from a single image, without considering optical flow correction? Thank you.

Invalid link

Dear @rajat95 @KRamPrabhakar ,

Thanks for your excellent work. The homepage has been broken for a long time, so I am here to request another valid link to access the dataset and pretrained parameters, for example, Google Drive. Thanks in advance.

Best,
Hanwei

Key DeepFuse/downsample0/bias not found in checkpoint

Hi, I have downloaded the pre-trained model and run your code with the command “python infer.py --source_dir ./data_samples/test_set --fusion_model tied --ref_label 2 --gpu 0”, but got the error:
"Key DeepFuse/downsample0/bias not found in checkpoint
[[node save_2/RestoreV2 (defined at infer.py:153) ]]"

Is there anything wrong with the model?
Thanks!

train refinenet problem

I trained the refinement network using the Kalantari dataset (the input files; I did not use the aligned files). I wonder if you used train_refine.py to train the model. I concatenated the optical flow with the input, as you do in infer.py, and trained for 55001 and 155001 iterations, but I don't think the result is as good as the pretrained model. Here is my output for ManStanding:

[images: map1, map2]
