dy112 / lsmi-dataset
[ICCV'21] Large Scale Multi-Illuminant (LSMI) Dataset for Developing White Balance Algorithm under Mixed Illumination
Home Page: https://www.dykim.me/projects/lsmi
For visualizing the coefficient map (.npy file), I changed VISUALIZE to True in the make_mixture_map file. When I run the file, it raises: ValueError: could not broadcast input array from shape (6000,8000) into shape (1500,2000), at line 214 in apply_wb_raw.
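A shape mismatch like the one above usually means the coefficient map was saved at a reduced resolution while the raw image is full resolution. A minimal sketch of one possible fix, assuming the map dimensions divide the image dimensions evenly (upsample_map is a hypothetical helper, not part of the repo):

```python
import numpy as np

def upsample_map(coeff_map, target_shape):
    # Upsample a reduced-size per-pixel coefficient map (e.g. 1500x2000)
    # to the full raw resolution (e.g. 6000x8000) by nearest-neighbor
    # repetition, so the element-wise white-balance broadcast succeeds.
    fy = target_shape[0] // coeff_map.shape[0]
    fx = target_shape[1] // coeff_map.shape[1]
    return np.repeat(np.repeat(coeff_map, fy, axis=0), fx, axis=1)
```

Nearest-neighbor repetition keeps the mixture coefficients piecewise constant; an interpolating resize (e.g. cv2.resize) would give smoother boundaries instead.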
In the preprocess data step, all of the output images are black in color;
they are not the white-balanced images we need for training the model.
hi,
Your work is great! I would like to know how you labeled the color checker locations. Did you use a model such as YOLO or a segmentation network, or did you annotate every color checker by hand? I look forward to your response, thanks.
Hello, may I ask whether the average angular error reported in the paper corresponds to the MAE_illum result or the MAE_rgb result in the public code?
Looking forward to your reply.
Hello author, I am very interested in your article and methods and would like to ask some questions about data collection.
Are there any restrictions or requirements on the brightness of the different light sources, the difference in color temperature, or the position of the color charts when collecting data? I previously used a camera to capture RGB data without AWB to generate GT, but this time, with the same code and equipment, I cannot produce a correct GT. What should I check? Also, I would like to confirm: pic12 - pic1 = pic2, where pic12 and pic1 are both RGB images without AWB, right?
Looking forward to your response, thanks.
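The additivity relation asked about above can be sanity-checked on linear raw data. A minimal sketch, assuming pic1, pic2, and pic12 are float NumPy arrays of linear (non-AWB) raw RGB captured from the same static scene; the tolerance value is an assumption:

```python
import numpy as np

def check_linearity(pic12, pic1, pic2, tol=0.01):
    # Under linear raw capture of a static scene, the image taken under
    # both illuminants should equal the sum of the single-illuminant
    # images: pic12 ~= pic1 + pic2 (so pic12 - pic1 ~= pic2).
    residual = np.abs(pic12 - (pic1 + pic2))
    return float(residual.mean()) <= tol
```

If this check fails, common culprits are non-linear (already gamma- or tone-mapped) data, auto-exposure changes between shots, or an unsubtracted black level.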
Hello, while generating the Sony dataset GT with the code you provided, I found NaN values in the meta.json file, and the results generated from files 0, 1, and 2 could not correctly reproduce your paper. How can we solve this problem?
Looking forward to your reply.
Hello,
is there any JSON file with the train/val/test splits that you could share publicly, so that your single-illuminant and mixed-illuminant results can be reproduced?
I noticed that the files you shared in separate_json_files consist only of multi-illuminant images.
hi,
Dongyoung, I would like to ask you a question.
I found that you resize all images to 256×256 for the U-Net. Why 256? If I want to train the U-Net on 4K (3840×2160) images, will that have any impact? And if I train at 256 but test on 4K images, will that affect accuracy?
In short, what training size should I use if I want to apply the model to 4K images? I also saw that you have a crop operation, and I am puzzled why you still resize the image after the crop. I would like to hear your suggestions and look forward to your response.
Thank you for your excellent work; I have learned a lot from it. What I would like to ask is: how can I apply the correction to an arbitrary image? Looking forward to your reply.
I have a few questions for which I have not found any documentation.
Hi DongYoung,
I ran into a problem when using the mixture map generated from the Nikon D810 dataset. Could you check the scene below? It has a rather odd illumination contour. The image is Place550_12 in the Nikon dataset. I get a different result, from both rawpy and my own prediction, in the bluish color-cast area.
Thanks
Could you please share the JSON files for the Nikon and Sony subsets separately from the RAW data zips, especially the metadata file, "meta.json"?
We could extract the RAW data without downloading the z01 and z02 files by using some unzipping tricks, but this does not produce the JSON metadata files, which leads to issues in preparing the mixture maps.
We did not have enough storage space to download all of them at the time, so if you have any chance to share the JSON files for the Nikon and Sony subsets, we would be very happy!
Hello, could you release the code for training HDRNet on this dataset?
To obtain ground-truth sRGB images, I processed the images in two ways with your scripts. In detail:
1st: run '1_make_mixture_map.py' with 'VISUALIZE=True'. This gave me a GT sRGB image, e.g. 'Placexxx_wb12.png'.
2nd: run '2_preprocess_data.py' at full resolution (i.e. SQUARE_CROP = False, SIZE = None, TEST_SIZE = None), which gave a GT tiff image, e.g. 'Placexxx_12_gt.tiff'. I then processed it with 'visualize.py' and got a GT sRGB image, e.g. 'Placexxx_12_gt.png'.
However, when I compared these two sRGB images, they are different.
What causes this, and what should I do to fix it?
Looking forward to your reply.
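Before debugging a mismatch like the one above, it can help to quantify how different the two outputs actually are. A small sketch, assuming both sRGB images are loaded as same-sized float arrays in [0, 1] (the helper name is hypothetical):

```python
import numpy as np

def mean_abs_diff(img_a, img_b):
    # Mean absolute per-pixel difference between two images. A large,
    # spatially uniform difference hints at a global gain or gamma
    # mismatch between the two pipelines; localized differences hint
    # at a per-pixel (e.g. mixture-map) discrepancy.
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return float(np.abs(a - b).mean())
```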
Based on your analysis, this Sony sensor seems a bit odd. It is a 14-bit sensor, but if that were the case, its black level should be closer to 512. However, according to your analysis, a black level of 128 is what produces correct image output, which suggests that you may have saved in a 12-bit mode. In other words, the effective bit depth of this Sony sensor's RAW data is actually 12 bits, right?
Originally posted by @shuwei666 in #12 (comment)
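The black-level question above can be made concrete with a normalization sketch. The specific values below (black level 128, white level 4095 for 12-bit data) are the assumptions under discussion, not confirmed sensor parameters:

```python
import numpy as np

def normalize_raw(raw, black=128.0, white=4095.0):
    # Map raw sensor counts to [0, 1]: subtract the black level, then
    # divide by the usable range. With a 12-bit payload, white ~ 2**12 - 1;
    # a genuine 14-bit payload would instead use black ~ 512, white ~ 16383.
    return np.clip((raw.astype(np.float32) - black) / (white - black), 0.0, 1.0)
```

If values normalized with the 14-bit parameters come out dark by roughly a factor of four, that is consistent with the data actually being 12-bit.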
Hello,
In dataloader_v4.py:

if self.random_color and self.split == 'train':
    augment_chroma = self.random_color(illum_count)
    ret_dict["illum_chroma"] *= augment_chroma
    tint_map = mix_chroma(uncalculable_masked_mixmap, augment_chroma, illum_count)
    input_rgb = input_rgb * tint_map  # apply augmentation to input image
Random color augmentation is applied to the input image here. However, when tint_map has negative values, the input_rgb image also gets negative values, and then:
def rgb2uvl(img_rgb):
    """
    convert 3 channel rgb image into uvl
    """
    epsilon = 1e-8
    img_uvl = np.zeros_like(img_rgb, dtype='float32')
    img_uvl[:,:,2] = np.log(img_rgb[:,:,1] + epsilon)
    img_uvl[:,:,0] = np.log(img_rgb[:,:,0] + epsilon) - img_uvl[:,:,2]
    img_uvl[:,:,1] = np.log(img_rgb[:,:,2] + epsilon) - img_uvl[:,:,2]
    return img_uvl
input_rgb goes into this function, so the logarithm receives negative values and img_uvl ends up with NaN values.
How did you train and obtain your results with this augmentation enabled without running into the NaN issue described here?
A workaround might be clamping the negative elements of tint_map to 0, but this might hurt the reproduction of your results.
Looking forward to hearing from you and thank you!
Cem
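The failure mode and the workaround discussed above can be illustrated as follows. This is a sketch: the clamping floor is an assumed value and this is not necessarily the authors' fix.

```python
import numpy as np

def rgb2uvl(img_rgb):
    # Same structure as the repo's rgb2uvl: log-green plus two
    # log-chromaticity channels relative to green.
    epsilon = 1e-8
    img_uvl = np.zeros_like(img_rgb, dtype='float32')
    img_uvl[:, :, 2] = np.log(img_rgb[:, :, 1] + epsilon)
    img_uvl[:, :, 0] = np.log(img_rgb[:, :, 0] + epsilon) - img_uvl[:, :, 2]
    img_uvl[:, :, 1] = np.log(img_rgb[:, :, 2] + epsilon) - img_uvl[:, :, 2]
    return img_uvl

def clamp_nonnegative(input_rgb):
    # Hypothetical workaround: clamp the augmented image to non-negative
    # values before the uvl conversion, so np.log never sees a negative input.
    return np.clip(input_rgb, 0.0, None)
```

np.log of a negative number returns NaN (with a runtime warning), which then propagates through the whole uvl tensor; clamping removes the NaN at the cost of flattening those pixels to log(epsilon).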
Hi,
I am using the Sony dataset for my task at hand.
I'm following your provided code and have a query about the mixture-map part, where CAMERA and RAW_EXT (for saturation) are defined. I changed the CAMERA variable to my path (D:/Sony). For RAW_EXT, am I supposed to use the CAMERA.dng file provided on GitHub ("sony.dng"), or do I have to replace it with "CAMERA.arw"?
In the latter case I'm getting an "input/output error".
Hello,
in the preprocessing_data.py code, there is the line

img_wb = img / illum_map

This is given in the paper as

I_ab(WB) = I_ab / L_ab    [equation 1]

where L_ab is illum_map. The illumination map is calculated as L_ab = alpha * L_a + (1 - alpha) * L_b.
In this case, if alpha = 0, then L_ab = L_b. L_b is an RGB vector; if there is zero chromaticity in even one channel of RGB, then L_b has a 0 element, causing a division by zero in [equation 1].
This issue arises, for example, in the Galaxy dataset at pixel illum_map[1143, 376, 2], the blue channel.
How can this be overcome? Should there be a workaround in calculating alpha, say, so that alpha cannot become 0, or should there be a minimum chromaticity value in the illuminant RGB vectors instead of simply 0?
Thanks!
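One possible guard for the division discussed above, assuming illum_map is a float NumPy array as in the preprocessing script; the epsilon value and helper name are assumptions, not the authors' solution:

```python
import numpy as np

def safe_white_balance(img, illum_map, eps=1e-6):
    # Floor the per-pixel illuminant chroma so the element-wise division
    # img / illum_map never divides by zero when a channel of the
    # illuminant RGB vector is exactly 0.
    return img / np.maximum(illum_map, eps)
```

The alternative mentioned in the question, enforcing a minimum chromaticity on the illuminant RGB vectors before mixing, would fix the same symptom one step earlier.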
Hi,
I am interested in using your dataset.
Unfortunately, the link provided on the GitHub page doesn't work anymore.
The Google Drive link provided on your official website doesn't work either.
Would it be possible to fix this?
Aleksander Kempski
hi,
I found that I can't download the pre-trained model; the link won't open. Could you re-share the pre-trained model? Thank you so much.