swz30 / cycleisp
[CVPR 2020--Oral] CycleISP: Real Image Restoration via Improved Data Synthesis
License: Other
Thank you for your awesome code!
I am hoping you might open-source the log files from training, e.g. the training and validation loss as a function of epoch (and/or batch), along with an estimate of the runtime.
I chose one picture from the Adobe FiveK dataset, "a3501-dgw_154.dng", and used rawpy to get the raw image (RGGB) and the sRGB image (both cropped to 512*512), but the result has artifacts. Here is the code:
import cv2
import numpy as np
import rawpy
import torch

def read_dng():
    with torch.no_grad():
        path = r'E:\MachineLearning_Data\fivek_dataset\raw_photos\HQa3501to4200\photos\a3501-dgw_154.dng'
        raw = rawpy.imread(path)
        raw_img = raw.raw_image_visible
        print(np.max(raw_img))
        width, height = 512, 512
        start_w, start_h = 1000, 1000
        raw_img_save = raw_img[start_h:start_h+height, start_w:start_w+width]
        # raw_img_save.tofile('a3501-dgw_154_crop.raw')
        # pack_raw: helper that packs the Bayer mosaic into 4 channels (RGGB)
        raw_img_save = pack_raw(np.expand_dims(raw_img_save/(2**14-1), -1))
        raw_img_save = torch.tensor(raw_img_save, dtype=torch.float32).unsqueeze(0).permute(0, 3, 1, 2).cuda()
        img_rgb = raw.postprocess(no_auto_bright=True, user_wb=raw.daylight_whitebalance)
        img_rgb_save = img_rgb[start_h:start_h+height, start_w:start_w+width, :]
        cv2.imwrite('a3501-dgw_154_crop.jpg', img_rgb_save[..., ::-1])
        img_rgb_save = torch.tensor(img_rgb_save/255.0, dtype=torch.float32).permute(2, 0, 1).cuda()
        ccm_tensor = model_ccm(img_rgb_save.unsqueeze(0))
        rgb_noisy = model_raw2rgb(raw_img_save, ccm_tensor)
        rgb_noisy = rgb_noisy.squeeze(0).permute(1, 2, 0)
        cv2.imwrite('raw2rgb_rgbg.jpg', ((rgb_noisy*255.0).cpu().detach().numpy()[..., ::-1]).astype(np.uint8))

read_dng()
Can you provide the training code? Thank you!
Hello, could you please show us the training code? Thank you!
In the forward method of the Raw2Rgb module in cycleisp.py, `x = self.body[-1](x)` means that x is activated by nn.PReLU(n_feats). As Figure 2 of the paper shows, x should be followed by the k-th RRG block here instead of an activation.
def forward(self, x, ccm_feat):
    x = self.head(x)
    for i in range(len(self.body)-1):
        x = self.body[i](x)
    body_out = x.clone()
    x = x * ccm_feat  ## Attention
    x = x + body_out
    x = self.body[-1](x)
    x = self.tail(x)
    x = self.tail_rgb(x)
    x = nn.functional.pixel_shuffle(x, 2)
    return x
Hi,
I'm trying to do the initial training on the MIT-Adobe FiveK dataset. The raw images are huge, which makes the loader slow. Did you do the cropping online?
Thanks.
Any updates on when you will release the training code?
Hello,
I am wondering why the log loss term is not used in the RAW2RGB network, as it is in the RGB2RAW network. Is there a principle behind this, or was it set empirically?
Hello, I tried to train the DenoiseNet with my own code on an NVIDIA 1080 Ti GPU, but the GPU memory runs out and I can only train with a patch size of 64 and a batch size of 1.
I would like to know more training details about the GPU setup, such as the number of GPUs.
Thanks for your good work!
And thanks for putting the processed SIDD validation data on Google Drive. I noticed the original SIDD validation set's Bayer pattern is not given on the benchmark website, so how did you pack each image's Bayer data into the R, G, G, B channel order?
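The packing the question above asks about can be sketched as follows. This is a minimal sketch, not the repository's code: `pack_rggb` is a hypothetical helper, and it assumes the sensor's Bayer pattern is RGGB with the red sample at the top-left pixel.

```python
import numpy as np

def pack_rggb(raw):
    # raw: (H, W) Bayer mosaic. Returns an (H/2, W/2, 4) array whose
    # channels are stacked in R, G, G, B order (assuming an RGGB pattern).
    return np.stack((raw[0::2, 0::2],   # R
                     raw[0::2, 1::2],   # G on red rows
                     raw[1::2, 0::2],   # G on blue rows
                     raw[1::2, 1::2]),  # B
                    axis=-1)
```

If the actual pattern were, say, GRBG, the four slices would simply be reassigned accordingly.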
Could you please share your training code, or the loss you used for training?
Any updates on when you will release the training code?
Hello,
Great work! I was wondering, would it be possible to add an MIT license to the code?
Thanks for your good work!
How can I test raw images without ground truth? Without it, I cannot obtain the noise variance, and therefore cannot get the restored image.
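One common workaround (a sketch of a general practice, not necessarily the authors' method) is to assume a signal-dependent shot/read noise model and build the variance map directly from the noisy input itself. The `shot_noise` and `read_noise` defaults below are hypothetical values that would normally come from camera metadata or a calibration step.

```python
import numpy as np

def variance_map(noisy_raw, shot_noise=0.01, read_noise=0.0005):
    # Heteroscedastic Gaussian model: per-pixel variance grows linearly
    # with the normalized signal intensity (noisy_raw in [0, 1]).
    return shot_noise * noisy_raw + read_noise
```

The resulting map can then be concatenated with the noisy raw as the network's noise-level input.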
Can CycleISP reach 52.41 dB PSNR on the SIDD benchmark, or only on the validation set?
Even with the pretrained model, I cannot get that result on the official server.
Line 80 in 76a52aa
To run the code, in utils/image_utils.py change
"from skimage.measure.simple_metrics import compare_psnr" to
"from skimage.metrics import peak_signal_noise_ratio as compare_psnr", and
"from skimage.measure import compare_ssim" to
"from skimage.metrics import structural_similarity as compare_ssim".
These functions were moved and renamed in newer versions of skimage.
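If the file needs to work across both old and new scikit-image versions, one possible pattern is a fallback import (a sketch, assuming scikit-image is installed):

```python
# Version-tolerant imports: newer scikit-image moved these functions
# from skimage.measure to skimage.metrics under new names.
try:
    from skimage.metrics import peak_signal_noise_ratio as compare_psnr
    from skimage.metrics import structural_similarity as compare_ssim
except ImportError:
    # Older scikit-image (< 0.16) still exposes the legacy names.
    from skimage.measure import compare_psnr, compare_ssim
```

Either branch leaves `compare_psnr` and `compare_ssim` usable with the same call sites.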
Hi, when I run test_dnd_rgb.py without the '--save images' parameter, the code runs fine.
However, when I set that parameter, I get this error:
Traceback (most recent call last):
File "/workspace/Cycle/test_dnd_rgb.py", line 66, in <module>
lycon.save(args.result_dir + 'png/'+ filenames[batch][:-4] + '.png', denoised_img)
File "/opt/conda/lib/python3.7/site-packages/lycon/core.py", line 24, in save
_lycon.save(path, image, options)
SystemError: <built-in function save> returned a result with an error set
Can you tell me why? Is it a problem with the lycon version? I can only pip install version 0.2.0; other versions fail to install via pip.
Can you share your training code? @swz30
Do you plan to release the code for the training part?
In the Rgb2Raw branch, the feature map before the Mosaic operation is HxWx3, and you use this code to generate the final raw output:
def mosaic(images):
    shape = images.shape
    red = images[:, 0, 0::2, 0::2]
    green_red = images[:, 1, 0::2, 1::2]
    green_blue = images[:, 1, 1::2, 0::2]
    blue = images[:, 2, 1::2, 1::2]
    images = torch.stack((red, green_red, green_blue, blue), dim=1)
    # images = tf.reshape(images, (shape[0] // 2, shape[1] // 2, 4))
    return images
This uses only a quarter of the image data to generate the raw output. I found some details in Section 3.1 of your paper, where the Bayer sampling function f_Bayer is applied. So why not output a feature map with shape HxWx1 to make full use of the data?
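What the question proposes could be sketched like this. It is a hypothetical alternative, not the repository's implementation: the RGGB pattern is sampled into a single full-resolution channel instead of a 4-channel half-resolution stack, so every output pixel carries one sampled value.

```python
import torch

def mosaic_flat(images):
    # images: (B, 3, H, W) RGB tensor. Returns a (B, 1, H, W) Bayer mosaic.
    # Assumes an RGGB pattern with the red sample at the top-left pixel.
    b, c, h, w = images.shape
    out = torch.zeros(b, 1, h, w, dtype=images.dtype, device=images.device)
    out[:, 0, 0::2, 0::2] = images[:, 0, 0::2, 0::2]  # R
    out[:, 0, 0::2, 1::2] = images[:, 1, 0::2, 1::2]  # G on red rows
    out[:, 0, 1::2, 0::2] = images[:, 1, 1::2, 0::2]  # G on blue rows
    out[:, 0, 1::2, 1::2] = images[:, 2, 1::2, 1::2]  # B
    return out
```

Note that both forms contain the same sampled values; the 4-channel version only rearranges them at half resolution, which is a common input layout for raw denoisers.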
I want to know whether you only used raw images to train the RGB2RAW and RAW2RGB network branches. For example, if you used clean sRGB images as ground truth for optimizing the RAW2RGB network, I don't know how to obtain that ground truth.
We need to install cmake to run lycon (although it is better to use cv2 than lycon):
conda install cmake
pip install lycon
Do this if you still want to use lycon.
Can you provide the training code and dataset?
Thanks
Hello,
I want to reproduce the results of this paper using the code you provided. I don't know when you will open-source it; I hope you can do so as soon as possible to help my research. Thank you very much.
Your work is very good. Can you provide the training code?