alex04072000 / singlehdr

[CVPR 2020] Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline

Python 71.66% HTML 25.22% CSS 2.35% JavaScript 0.77%


singlehdr's Issues

HDR-synth LDR set

Thanks for sharing your great work.
The project website provides only the HDR-Synth HDR set.
Could you please share the HDR-Synth LDR set, or the Python script used to create the LDR set from the HDR set?

Thanks
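In the meantime, here is a minimal sketch of how an LDR image could be synthesized from an HDR one. This is my own assumption of the pipeline, not the authors' actual script: scale by an exposure, apply a nonlinear CRF (a plain gamma curve stands in here for the sampled CRFs the paper uses), clip, and quantize to 8 bits.

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=1.0 / 2.2):
    """Synthesize an 8-bit LDR image from linear HDR radiance.

    The gamma curve is a placeholder for a real camera response
    function (CRF); the paper samples CRFs from a database.
    """
    x = hdr * exposure                            # simulate exposure
    x = np.clip(x, 0.0, 1.0)                      # sensor saturation / clipping
    x = x ** gamma                                # nonlinear CRF (assumed gamma)
    return np.round(x * 255.0).astype(np.uint8)   # 8-bit quantization

ldr = hdr_to_ldr(np.array([[0.0, 0.25, 2.0]]))
```

Radiance values above 1.0 saturate to 255, which is exactly the information the reconstruction networks later try to recover.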

A debugging question

When I run 'train_dequantization_net.py', I get the error 'the following arguments are required: --logdir_path, --hdr_prefix'.
How can I fix this?
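That message comes straight from argparse: the script declares both flags as required, so both must be passed on the command line (a checkpoint/log directory and the root of the HDR-Synth training data). A minimal, self-contained reproduction of the behavior — the flag names are from the error message, the directory values below are placeholders:

```python
import argparse

# Reproduce the two required flags from the error message.
parser = argparse.ArgumentParser()
parser.add_argument('--logdir_path', required=True,
                    help='where checkpoints/logs are written')
parser.add_argument('--hdr_prefix', required=True,
                    help='root directory of the HDR-Synth training data')

# Omitting either flag makes argparse exit with the reported error.
args = parser.parse_args(['--logdir_path', './ckpt_deq',
                          '--hdr_prefix', './HDR-Synth'])
print(args.logdir_path, args.hdr_prefix)
```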

About downloading the HDR-Eye dataset

Hi, @alex04072000

I'm sorry that this issue has nothing to do with your paper.

I tried to download the HDR-Eye dataset via FTP from https://www.epfl.ch/labs/mmspg/downloads/hdr-eye/
However, the server seems to be down, since it returns a 421 response code.

So, would you be willing to share the original HDR-Eye dataset?
(I know you have already shared some generated images from the HDR-Eye dataset; I'd like to use the multi-exposure images.)

Best,

File Missing

```bash
!python "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\train_dequantization_net.py" --logdir_path ["E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\weights\ckpt_deq\"] --hdr_prefix ["E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR_training_data\HDR-Synth\"]
```

```
2021-07-08 12:49:10.036892: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-07-08 12:49:10.036950: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
  File "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\train_dequantization_net.py", line 8, in
    from dataset import get_train_dataset, RandDatasetReader
  File "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\dataset.py", line 171, in
    a_dataset_test_posfix_list = _load_pkl('a_dataset_test')
  File "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\dataset.py", line 164, in _load_pkl
    with open(os.path.join(CURR_PATH_PREFIX, name + '.pkl'), 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\a_dataset_test.pkl'
```
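From the traceback, dataset.py unconditionally loads pickled file lists (a_dataset_test.pkl, among others) from the training_code directory at import time, and those pickles are not shipped with the repository. As a stopgap sketch — the real pickles presumably hold lists of dataset file postfixes; the empty list below is purely a hypothetical placeholder to get past the import — you could generate the file yourself:

```python
import os
import pickle

CURR_PATH_PREFIX = '.'  # stands in for the training_code directory

def _dump_pkl(name, obj):
    # Write-side mirror of dataset.py's _load_pkl shown in the traceback.
    with open(os.path.join(CURR_PATH_PREFIX, name + '.pkl'), 'wb') as f:
        pickle.dump(obj, f)

# Hypothetical placeholder content; the real file's contents are unknown.
_dump_pkl('a_dataset_test', [])
```

With a real setup you would fill the list with your own dataset's file postfixes rather than an empty list.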

Release training code to update to TensorFlow 2.0

I'm having trouble running the script; I think it's caused by incompatibilities between the package versions installed by pip and those installed by Anaconda.

Could you provide the training code so I can update to Tensorflow 2.0?

tensorflow.contrib error

```
import tensorflow.contrib.slim as slim
ModuleNotFoundError: No module named 'tensorflow.contrib'
```
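tf.contrib was removed in TensorFlow 2.x, which is what this error indicates. slim itself survives as the standalone tf-slim package, so one commonly used workaround — assuming nothing else in the code depends on other contrib modules — is:

```python
# pip install tf-slim
import tf_slim as slim  # drop-in replacement for tensorflow.contrib.slim

# The repo's TF1-style graph code can then run under TF2's compat layer:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```

The cleaner alternative, if you don't want the compatibility layer, is simply to install a 1.x TensorFlow (e.g. 1.15), where tensorflow.contrib.slim still exists.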

How to display the output HDR on the HDR display?

Thanks for sharing your great work.
I have a question about the output of your network: is it ready to be shown on an HDR display?
Is it in the light (linear) domain or a perceptual domain? In other words, do I need to perform de-normalization to make it work on HDR displays?
Also, I have the same question about the approaches you compared against (such as hdrcnn, expandnet, and drtmo): in what domains are their outputs?
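For context, HDR10-style displays expect a PQ-encoded (SMPTE ST 2084) signal rather than linear radiance. So if a network's output is linear light, one would scale it to absolute luminance in nits and apply the PQ transfer function — the nit scaling itself is a choice you have to make, since these networks output relative, not absolute, radiance. A sketch of the PQ encode:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance_nits):
    """Encode absolute luminance (cd/m^2, up to 10000) into a PQ signal in [0, 1]."""
    y = np.clip(np.asarray(luminance_nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    ym = y ** M1
    return ((C1 + C2 * ym) / (1.0 + C3 * ym)) ** M2
```

Whether any of the compared methods bake such an encoding into their outputs is exactly the question being asked here; the sketch only shows what "ready for an HDR display" would mean signal-wise.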

Could you explain how to read and write HDR file

Hello. I am studying HDR imaging, so I read your paper — thank you for your work.
I am wondering how you read HDR files as ground truth for training, and also how you write the results out as HDR files. Thank you for the help.

about network parameters

Hi, nice work.
I am wondering whether there is any comparison of network parameter counts with previous methods?
What is the overall computational cost of SingleHDR?
Thanks for the help.

Training Linearization Net

Hi @alex04072000

I have a question about training the Linearization-Net. I see that:
First, you apply a randomly selected CRF crf (with corresponding inverse CRF invcrf) to convert an HDR image into an LDR image.
Second, after some processing of the LDR image, you predict the inverse CRF from it (pred_invcrf), which is matched against the true inverse CRF (invcrf).

I could be wrong, but I think an HDR image is not necessarily equivalent to a RAW image — it could already carry some non-linear CRF of its own. When you then apply crf, the non-linearity in the resulting LDR image compounds the non-linearity already present in the HDR image. This means the "correct" inverse CRF expected from the LDR image cannot simply be invcrf. Could you please let me know if I'm misunderstanding something?

Of course, if all the HDR composites in your dataset were made from RAW images, then this is completely fine. Thanks!

Training scripts

Thanks for the valuable code and dataset. Would it be possible to provide the training code for better reproducibility, since there are several training hyper-parameters? Thanks very much.

Question about 'Joint training of the entire pipeline and Refinement-Net'

Hi, I encountered a problem when running the command in 'Joint training of the entire pipeline and Refinement-Net'.
I directly loaded the pre-trained model trained on the synthetic data. The loss does not diminish during the subsequent training, and the model generates blank images after several iterations.

Some points may be helpful to figure out the cause of the problem:

  1. There seems to be an inconsistency between the code and the paper. According to the paper, Ltv and Lp are used in the fine-tuning stage. However, in 'finetune_real_dataset.py', I could not find Lp or Ltv.

  2. In 'finetune_real_dataset.py', the weights of all four models are updated — should some of them be fixed?

Thanks a lot!
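For reference, the two losses mentioned above are standard ones. A total-variation loss can be sketched in a few lines of NumPy (Lp, a perceptual loss, would additionally need a pretrained feature network, so only Ltv is shown; this is a generic formulation, not the authors' exact code):

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total-variation loss for an (H, W, C) image:
    mean absolute difference between vertically and horizontally
    neighboring pixels."""
    dh = np.abs(img[1:, :, :] - img[:-1, :, :]).mean()
    dw = np.abs(img[:, 1:, :] - img[:, :-1, :]).mean()
    return float(dh + dw)
```

A constant image has zero TV loss; the term penalizes high-frequency noise in the reconstructed HDR without affecting smooth regions.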

about train data formats

Hi, please: my RAW images are in DNG format — can they be used directly for training? How does the code read DNG images?

Temporary link not available

Hello, the temporary link you published is no longer valid. Could you please publish the links to the dataset and pre-trained model again? Thanks a lot.

Ground-truth CRFs and CRF evaluation

Hi @alex04072000

Thanks for publishing your code and datasets. Going through the training and test datasets, I could not find any information about the ground-truth CRFs (or the cameras with which the images were taken). Without them, how can we evaluate the estimated CRFs? Could you please help here?

Also, in your main and supplementary papers, which dataset(s) were used for the CRF evaluation and ablation study?

Best!
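For what it's worth, when ground-truth curves are available, estimated CRFs are commonly compared as discretized 1-D curves sampled at the same irradiance points. A sketch of an RMSE metric over uniform samples on [0, 1] — my own generic formulation, with two hypothetical gamma curves standing in for estimated and ground-truth CRFs:

```python
import numpy as np

def crf_rmse(crf_est, crf_gt):
    """RMSE between two CRFs sampled at the same irradiance points."""
    crf_est = np.asarray(crf_est, dtype=np.float64)
    crf_gt = np.asarray(crf_gt, dtype=np.float64)
    return float(np.sqrt(np.mean((crf_est - crf_gt) ** 2)))

t = np.linspace(0.0, 1.0, 1024)     # uniformly sampled irradiance
gamma_22 = t ** (1 / 2.2)           # hypothetical estimated CRF
gamma_20 = t ** (1 / 2.0)           # hypothetical ground-truth CRF
err = crf_rmse(gamma_22, gamma_20)
```

The open question in this issue — where the ground-truth curves come from for these datasets — is exactly what this metric cannot answer by itself.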

TypeError: __int__ returned non-int (type NoneType)

Hi,
Thanks for your excellent work; however, I ran into a problem when running the test code.
It raises 'TypeError: __int__ returned non-int (type NoneType)', which seems to come from the Upsample layer in the deconv layer of hal-net.
I am using TensorFlow 1.10.
Could you give me some advice on it?

Thanks!

CRFs dataset

Hi,

Thank you for your great work. I wonder if you could share the CRF dataset you used for training?

Thanks!

Question about HDR-VDP-2 scores

Thank you for your work.
I have a question about HDR-VDP-2 scores.

I tried to compute HDR-VDP-2 scores from the HDR-Eye results provided on your project website, but I cannot reproduce the 53.16 reported in the paper.
Could you tell me how to obtain the same result?

Here is the code I used (two variants for the pixels-per-degree argument):

```matlab
reference = hdrimread(reference_path);
test = hdrimread(test_path);

% Variant 1: fixed pixels-per-degree
res = hdrvdp(reference, test, 'rgb-bt.709', 30);

% Variant 2: computed pixels-per-degree
ppd = hdrvdp_pix_per_deg(24, [512 512], 0.5);
res = hdrvdp(reference, test, 'rgb-bt.709', ppd);
```

Question about HDR-VDP Q-score , PSNR, SSIM of HDR-Real test set

Thank you for your great paper.
I am writing to ask a question.

I tried to compute HDR-VDP-2 scores, PSNR, and SSIM on the provided HDR-Real test set. I also referred to the code you mentioned in a closed issue:
#4 (comment)

I could reproduce the paper's results on the HDR-Eye dataset, but on the HDR-Real test set I got the following:

HDR-VDP Q-score : 50.134
PSNR : 22.35
SSIM : 0.737

These are considerably lower than the values in your paper, especially the SSIM and HDR-VDP scores.
I used the original MATLAB code of both HDR-VDP 2.2.2 and HDR-VDP 2.2.1, with the same result.
For PSNR and SSIM, I used Photomatix's balanced tone mapping and MATLAB's psnr and ssim functions.

Did I miss something?
Could you let me know how to get the same result as paper please?
Thank you.

Question about HDR-Real dataset

Hi, I am glad to see your beautiful works here.

I've got a question about HDR-Real dataset

How can I get the subset of the HDR-Real dataset you had used?

According to the supplementary material, the HDR-Real dataset statistics are 410 HDR images and 3,974 LDR images for training, and 70 and 919 images respectively for testing.

However, what I have is about 10k pairs of HDR/LDR images for training (and 2k for testing).

I know there are various versions of datasets in this research area, but I need your help.

Thank you,

How can I get the CRF from a single image from my camera?

Could you kindly give me any advice on how to use or modify your code to obtain the CRF vector of my camera? Can I load the checkpoint and run it on my own picture, or should I retrain the network? Also, where in the code should I make changes to get the CRF for the RGB channels?

Which tone mapping algorithm did you use?

Hi, @alex04072000

I am trying to reproduce your visualized HDR images on the HDR-Eye dataset, but I cannot get the same results.

Could you tell me which tone mapping algorithm you used?

Examples (screenshots omitted):

  • mu_law (code below)
  • Photomatix
  • your published result

Code:

```python
import numpy as np
import cv2

def mu_law(x, mu=10.):
    return np.log(1.0 + mu * x) / np.log(1.0 + mu)

def log(x):
    return np.log(x)

def main():

    FILENAME = 'gt'

    HDR_PATH = f'/src/00004/{FILENAME}.hdr'

    # IMREAD_ANYDEPTH is needed to read the float radiance values of a
    # .hdr file; the default flags convert the image to 8-bit instead
    img = cv2.imread(HDR_PATH, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)

    img = img / img.max()  # normalize to [0, 1] before mu-law compression
    img = mu_law(img)
    # img = log(img)

    print(img.shape, img.max(), img.min())
    cv2.imwrite(f'./{FILENAME}.jpg', img * 255.)


if __name__ == '__main__':
    main()
```
