alex04072000 / singlehdr
[CVPR 2020] Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline
This link used in the Google Colab notebook is broken:
https://www.cmlab.csie.ntu.edu.tw/~yulunliu/SingleHDR_/code_and_ckpt.zip
Google Colab notebook:
https://colab.research.google.com/drive/1WzNaGSaucF2AMDSdUCBMEOauBg4IowMa
Thanks for sharing your great work.
The project website provides only the HDR set of HDR-Synth.
Can you please share the HDR-Synth LDR set?
Or could you share the Python script for creating the LDR set from the HDR set?
Thanks
When I run 'train_dequantization_net.py', there is an error: 'the following arguments are required: --logdir_path, --hdr_prefix'.
How can I fix this problem?
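That message comes from Python's argparse: the training script declares both flags as required, so they must be passed on the command line. A minimal sketch of the same mechanism (the flag names come from the error message; the help strings and example values here are assumptions, not the script's actual defaults):

```python
import argparse

# Hypothetical reconstruction of the parser the training script appears to use.
parser = argparse.ArgumentParser()
parser.add_argument('--logdir_path', required=True,
                    help='where checkpoints and logs are written (assumed)')
parser.add_argument('--hdr_prefix', required=True,
                    help='root folder of the HDR-Synth training data (assumed)')

# Supplying both required flags parses cleanly; omitting either one
# reproduces the "the following arguments are required" error.
args = parser.parse_args(['--logdir_path', './ckpt_deq', '--hdr_prefix', './HDR-Synth'])
print(args.logdir_path, args.hdr_prefix)
```

So the fix is simply to invoke the script with both paths, e.g. `python train_dequantization_net.py --logdir_path ./ckpt_deq --hdr_prefix ./HDR-Synth`.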
Thanks for the great paper and code sharing. It seems the pretrained model link provided does not respond.
Hi, @alex04072000
I'm sorry that this issue has nothing to do with your paper.
I tried to download the HDR-Eye dataset from https://www.epfl.ch/labs/mmspg/downloads/hdr-eye/
However, the server seems to be down, since it returns a 421 response code.
So, would you share the original HDR-Eye dataset?
(I know you have already shared some generated images of the HDR-Eye dataset, but I'd like to use the multi-exposure images.)
Best,
```bash
!python "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\train_dequantization_net.py" --logdir_path "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\weights\ckpt_deq\" --hdr_prefix "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR_training_data\HDR-Synth\"
```
```
2021-07-08 12:49:10.036892: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-07-08 12:49:10.036950: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\DELL\anaconda3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
  File "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\train_dequantization_net.py", line 8, in <module>
    from dataset import get_train_dataset, RandDatasetReader
  File "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\dataset.py", line 171, in <module>
    a_dataset_test_posfix_list = _load_pkl('a_dataset_test')
  File "E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\dataset.py", line 164, in _load_pkl
    with open(os.path.join(CURR_PATH_PREFIX, name + '.pkl'), 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'E:\Darsh\DP\Internship&Works\Wielding Helmet\SingleHDR-master\training_code\a_dataset_test.pkl'
```
I'm having trouble running the script; I think it's caused by incompatibilities between the package versions installed by pip and those installed by Anaconda.
Could you provide the training code so I can update it to TensorFlow 2.0?
```
import tensorflow.contrib.slim as slim
ModuleNotFoundError: No module named 'tensorflow.contrib'
```
Thanks for sharing your great work.
I have a question about the output of your network: is it ready to be shown on an HDR display?
Is it in the light (linear) domain or the perceptual domain? In other words, do I need to perform de-normalization to make it work on HDR displays?
Also, I have the same question about the existing approaches you compared against: what are the output domains of those methods (such as HDRCNN, ExpandNet, DrTMO, and others)?
Hi, when I run the code, I get a 'Segmentation fault (core dumped)' error. How can I fix this problem?
It seems that you don't provide this pretrained model.
Hello. I am studying HDRI.
So I read your cool paper. Thank you for your work.
I am wondering how you read HDR files as ground truth for training. I also wonder how you write the result out as an HDR-format file. Thank you for your help.
Hi, nice work!
I am wondering whether there is any comparison of network parameter counts with previous methods.
How much is the overall computational cost of SingleHDR?
Thanks for the help.
I have a question about training the Linearization-Net. I see that:
First, you apply a randomly selected CRF crf (and its corresponding inverse CRF invcrf) to convert an HDR image into an LDR image.
Second, after some processing of the LDR image, you predict the inverse CRF from it (pred_invcrf), which is matched against the true inverse CRF (invcrf).
I could be wrong, but I think an HDR image is not necessarily similar to a RAW image, which means it could already carry some non-linear CRF of its own. When you then apply crf, the non-linearity in the resulting LDR image also folds in the non-linearity already present in the HDR image. This means the "correct" inverse CRF expected from the LDR image cannot be invcrf. Could you please let me know if I'm misunderstanding anything?
Of course, if all the HDR composites in your dataset were made from RAW images, then it's completely fine. Thanks!
Thanks for the valuable code and dataset. Would it be possible to provide the training code for better reproducibility, since there are several training hyper-parameters involved? Thanks very much.
The current code saves the result as an HDR image. How can I convert it to an LDR image and save it for display?
Thanks,
Kaishi
Hi, I encounter a problem when I run the command in 'Joint training of the entire pipeline and Refinement-Net'.
I directly load the pre-trained model trained on the synthetic data. The loss doesn't diminish during the subsequent training, and the model generates blank images after several iterations.
Some points that may help figure out the cause:
There seems to be an inconsistency between the code and the article: according to the article, Ltv and Lp are used in the fine-tuning stage.
However, in 'finetune_real_dataset.py', I couldn't find Lp or Ltv.
Also, in 'finetune_real_dataset.py', all weights of the four models are updated; should some of them be fixed?
Thanks a lot!
Thank you for your efforts on this work! I would like to know whether this method can be applied to real-time video.
Hey just want to let you know that the Link to download the pre-trained models is down:
https://www.cmlab.csie.ntu.edu.tw/~yulunliu/SingleHDR_/ckpt.zip
Hi, my RAW images are in DNG format. Can they be used directly for training? How does the code read DNG images?
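For what it's worth, a common route for DNG files in Python is the third-party rawpy package (a LibRaw wrapper), which demosaics the raw data into linear RGB. This is a hedged sketch under the assumption that the repo's loaders do not read DNG themselves; the function names here are hypothetical helpers, not part of the SingleHDR code:

```python
import numpy as np

def normalize16(rgb16):
    """Scale 16-bit integer RGB to float32 in [0, 1]."""
    return rgb16.astype(np.float32) / 65535.0

def dng_to_linear_rgb(path):
    """Demosaic a DNG file into linear float RGB (hypothetical helper)."""
    import rawpy  # third-party: pip install rawpy
    with rawpy.imread(path) as raw:
        # gamma=(1, 1) and no_auto_bright keep the output linear, which is
        # what a camera-pipeline-reversal model would expect as "RAW".
        rgb16 = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
    return normalize16(rgb16)
```

The demosaicing and white-balance choices in `postprocess` change the data, so they would need to match whatever the training data assumed.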
Nice job! However, HDR images are still limited in number. Could you please provide the real HDR dataset you captured? Thanks a lot.
Hello, the temporary link you published is invalid. Could you please publish the dataset and pre-trained model links again? Thanks a lot.
Thanks for publishing your code and datasets. Going through the training and test datasets, I could not find any information about the ground-truth CRFs (or the cameras the images were taken with). Without that, how can we evaluate the estimated CRFs? Could you please help here?
Also, in your main and supplementary papers, which dataset(s) were used for the CRF evaluation and the ablation study?
Best!
Hi,
Thanks for your excellent work; however, I ran into some problems when running the test code.
It raises 'TypeError: int returned non-int (type NoneType)', which seems to come from the Upsample layer in the deconv layer of hal-net.
I am using TensorFlow 1.10.
Could you give me some advice on it?
Thanks!
Hi,
Thank you for your great work. I wonder if you could share the CRF dataset you used for training?
Thanks!
This algorithm is very nice. I want to port it to my Android camera app; is that possible?
Good job! Could you please provide the versions of the Python packages you used?
Thank you for your work.
I have a question about HDR-VDP-2 scores.
I have tried to compute HDR-VDP-2 scores on the HDR-Eye results given on your project website, but I cannot reproduce the 53.16 score from the paper.
Could you tell me how to get the same result as the paper?
Here is the code I used (two variants):
```matlab
reference = hdrimread(reference_path);
test = hdrimread(test_path);

% Variant 1: fixed pixels-per-degree
res = hdrvdp(reference, test, 'rgb-bt.709', 30);

% Variant 2: computed pixels-per-degree
ppd = hdrvdp_pix_per_deg(24, [512 512], 0.5);
res = hdrvdp(reference, test, 'rgb-bt.709', ppd);
```
Thank you for your great paper.
I am writing to ask a question.
I have tried to compute HDR-VDP-2 scores, PSNR, and SSIM on the given HDR-Real test set.
I also referred to the code you mentioned in a closed issue:
#4 (comment)
I could reproduce the paper's results on the HDR-Eye dataset.
But on the HDR-Real test set, I got the results below:
HDR-VDP Q-score: 50.134
PSNR: 22.35
SSIM: 0.737
These are quite a bit lower than the values in your paper, especially the SSIM and HDR-VDP scores.
I used the original MATLAB code of both HDR-VDP 2.2.2 and HDR-VDP 2.2.1, but the results were the same.
For PSNR and SSIM, I used Photomatix's balanced tone-mapping method and MATLAB's psnr and ssim functions.
Did I miss something?
Could you let me know how to get the same results as the paper?
Thank you.
Hi, I am glad to see your beautiful work here.
I have a question about the HDR-Real dataset.
How can I get the subset of HDR-Real that you used?
The supplementary lists the statistics of HDR-Real as 410 HDR images and 3,974 LDR images for training, and 70 HDR images and 919 LDR images for testing.
However, what I downloaded is about 10k pairs of HDR/LDR images for training (2k for testing).
I know there are various versions of datasets in this research area, but I need your help.
Thank you,
Could you kindly give me advice on how to use and modify your code to obtain the CRF vector of my own camera? Can I load the provided checkpoint and run it on my own pictures, or should I retrain the network? Also, where in the code should I modify to get the CRF for each RGB channel?
Hi, @alex04072000
I am trying to reproduce the visualizations of HDR images on the HDR-Eye dataset.
However, I cannot get the same results.
Could you tell me which tone-mapping algorithm you used?
Example: using mu_law (code below).
```python
import numpy as np
import cv2

def mu_law(x, mu=10.):
    return np.log(1.0 + mu * x) / np.log(1.0 + mu)

def log(x):
    return np.log(x)

def main():
    FILENAME = 'gt'
    HDR_PATH = f'/src/00004/{FILENAME}.hdr'
    img = cv2.imread(HDR_PATH) / 255.
    img = mu_law(img)
    # img = log(img)
    print(img.shape, img.max(), img.min())
    cv2.imwrite(f'./{FILENAME}.jpg', img * 255.)

if __name__ == '__main__':
    main()
```
Can you update the code to the latest TensorFlow version?
The current code does not work.