
akshaydudhane16 / bipnet

131 stars, 7 watchers, 17 forks, 1.04 MB

[CVPR 2022 Oral, Best Paper Finalist] Burst Image Restoration and Enhancement. SOTA for burst super-resolution, low-light burst image enhancement, and burst image denoising.

Python 100.00%
pytorch low-light-image-enhancement image-restoration burst-processing burst-superresolution burst-denoising cvpr22 low-level-vision

bipnet's People

Contributors: akshaydudhane16, swz30


bipnet's Issues

BurstSR real dataset training

Thank you for your wonderful work!

I'm trying to train with your code, and on the synthetic dataset BIPNet shows excellent results.
But on the real burst dataset, it does not give results as good as those described in your paper.
Could you give me some advice?

I'm really looking forward to your answer.

No training code for denoising

Thanks for sharing this good paper.

I would like to study burst denoising through your Python code, but there is no training code for it.

Do you have plans to release the denoising training code?

Thanks.

About testing in low-light enhancement

Hello,
Thanks for your great work. Will you release your testing code for low-light enhancement? I also have some questions about this task: (1) What is the final output of your network, a raw image without post-processing or an sRGB image as in Karadeniz et al.? (2) How do you evaluate the metrics, in the raw domain or after post-processing?

Model class methods need to be overridden for PyTorch Lightning

Thanks for your work.
I'm trying to reproduce the training result for super-resolution track 1.
However, after installing the environment from install.yml, I cannot run the training code with
python BIPNet_Track_1_training.py.

The error appears as follows:

pytorch_lightning.utilities.exceptions.MisconfigurationException: No `training_step()` method defined. Lightning `Trainer` expects as minimum a `training_step()`, `train_dataloader()` and `configure_optimizers()` to be defined.

I'm fixing it by adding training_step and configure_optimizers methods to BIPNet and setting the Adam optimizer with lr 1e-4 and a cosine annealing scheduler, as the paper states. Is this the correct way to reproduce the result?
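For reference, a minimal sketch of such a wrapper, assuming an L1 training loss and a placeholder T_max; only the Adam optimizer with lr 1e-4 and the cosine annealing schedule come from the paper as quoted here, everything else is an assumption:

```python
import pytorch_lightning as pl
import torch

# Hypothetical wrapper; `model` is a BIPNet instance, the L1 loss and
# T_max are assumptions, Adam lr 1e-4 + cosine annealing follow the paper.
class BIPNetLit(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.loss_fn = torch.nn.L1Loss()  # assumed loss; check the repo

    def training_step(self, batch, batch_idx):
        burst, gt = batch                 # one burst and its ground truth
        loss = self.loss_fn(self.model(burst), gt)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        opt = torch.optim.Adam(self.parameters(), lr=1e-4)
        sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=300)
        return [opt], [sched]
```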

Loss NaN for BurstSR Track 2 training

When I train on the real-world BurstSR benchmark using Track_2_evaluation.py, the loss value becomes NaN.

I did not modify any of the source code from the released version.

[Screenshot: loss_nan_track2_bipnet]

Do you know how to fix this issue?

About the training settings for the real-world BurstSR dataset

Thank you for your wonderful work!

I'm trying to train with your code, and on the synthetic dataset BIPNet shows excellent results.
But on the real burst dataset, it does not give results as good as those described in your paper.
The loss continues to increase except at the beginning.
Your code sets a learning rate of 1e-5 annealed to 1e-6 with CosineAnnealingLR as the default setting (see the sketch below).
If this learning rate differs from the setting used in the paper's experiments, could you share the training settings for the real burst dataset?
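For concreteness, that default corresponds to something like the following; the placeholder module and T_max are illustrative, not the repo's actual values:

```python
import torch

model = torch.nn.Conv2d(3, 3, 3)  # placeholder module, not BIPNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # top of the quoted range
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=100, eta_min=1e-6  # anneals the lr down to 1e-6
)
```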

I'm really looking forward to your answer.

The Results on Track 2 of Burst SR Cannot Be Reproduced

Hi, I used the provided checkpoint for real burst SR and ran Track_2_evaluation.py. The reproduced result is much lower than the one reported in your paper.

[Screenshot: reproduced Track 2 evaluation results]

As shown in the figure, the reproduced PSNR is 47.42, while the reported PSNR in the paper is 48.63. Is there any problem?

Thanks.

Question about burst SR training

Thanks for sharing this great work.

Currently, I'm trying to train your model (BIPNet) alongside my own model to compare their performance.

I used the Zurich RAW-to-DSLR dataset that you mentioned, together with your code. The only points I changed in BIPNet were the precision and deterministic settings in PyTorch Lightning.

Even after training for 35 epochs, the validation PSNR was under 42 dB (this is not the test result from the evaluation code).

Could you let me know at which point the validation PSNR exceeded 42 dB for you?

Thanks.
Luis Kang.

About batch size in training

Thank you for your great work!

I have a question about the batch size.
The paper's experiments section does not mention the batch size.
In your code, however, you set the batch size to 1. Is there a reason for this choice?
Training takes very long and GPU memory usage is quite low...
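For reference, the setting in question is just the DataLoader's batch_size argument; the dataset and tensor shapes below are placeholders, not the repo's actual pipeline:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# placeholder data: 8 bursts of 14 four-channel RAW frames with 3-channel targets
train_dataset = TensorDataset(torch.rand(8, 14, 4, 48, 48),
                              torch.rand(8, 3, 192, 192))

# batch_size=1 is the default being asked about; raising it is the usual
# way to increase GPU utilization, if memory and the model allow it
train_loader = DataLoader(train_dataset, batch_size=1, shuffle=True, num_workers=4)
```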

I'm really looking forward to your answer.

Problem with the test code and PSNR on the grayscale dataset

Dear akshaydudhane16,

First of all, thanks for your reply. I understand there is a part of Grayscale_denoising_testing.py that calculates PSNR. However, it computes PSNR before DenoisingPostProcess, while other methods, such as MFIR and BPN, compute PSNR after DenoisingPostProcess.

Using the evaluation method of BPN and MFIR, I can only get PSNRs of (38.83, 36.07, 33.05, 29.44, 34.35), which differ from the PSNRs (41.26, 38.74, 35.91, 31.35, 36.81) in your paper. For fairness, BPN, MFIR, and BIPNet should all first produce the final image and then be evaluated on it (see the sketch below).

So I have a bit of a problem with that, and I'm looking forward to your answers.
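For reference, a minimal sketch of the evaluation order being discussed; `post_process` is a hypothetical stand-in for DenoisingPostProcess, and the tensors are placeholders:

```python
import torch

def psnr(pred, gt, max_val=1.0):
    # standard PSNR: 10 * log10(MAX^2 / MSE)
    mse = torch.mean((pred - gt) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

post_process = lambda x: x.clamp(0, 1)  # stand-in for DenoisingPostProcess

net_output = torch.rand(1, 1, 64, 64)   # placeholder network output
gt = torch.rand(1, 1, 64, 64)           # placeholder ground truth

# the fairness point in this issue: compute PSNR *after* post-processing,
# as the BPN/MFIR evaluation scripts reportedly do
score = psnr(post_process(net_output), post_process(gt))
```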

Thanks,

The pre-trained model does not reach the paper's numbers

Sorry to bother you, but I have a question. I tried to use the MFIR test code to evaluate the PSNR of BIPNet on grayscale denoising. However, I can only get 38.79 dB at gain 1, which differs from the 41.26 dB in your paper. Do you know what the problem might be? Thanks, looking forward to your reply.

The results on the grayscale dataset are 3 dB lower than the PSNR in the paper

Dear akshaydudhane16,

I'm sorry to bother you with a question. Using the MFIR evaluation code, I got the test results of your pretrained model on the grayscale dataset, which are very different from the results (41.26, 38.74, 35.91, 31.35, 36.81) in your paper. I would be most grateful if you could reply to me.

Thanks,

Gain       1      2      4      8      Average
PSNR (dB)  38.83  36.07  33.05  29.44  34.35

New Super-Resolution Benchmarks

Hello,

MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

If you are interested in participating, you can add your algorithm by following the submission steps.

We would be grateful for your feedback on our work!

About the code for the network

Thanks for your excellent work on burst SR. I have a question about the code for the burst feature alignment part. In your code, four deformable convolutions are defined; the input to 'self.deform3' is 'burst_feat' while the others take 'feat'. Is this a deliberate design choice or a mistake?
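For readers unfamiliar with the API, a minimal, self-contained example of how torchvision's DeformConv2d pairs an input tensor with predicted offsets; the names and shapes are illustrative, not the repo's actual module:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

c, k = 64, 3
offset_conv = nn.Conv2d(c, 2 * k * k, kernel_size=k, padding=1)  # 2 (x, y) offsets per kernel tap
deform = DeformConv2d(c, c, kernel_size=k, padding=1)

feat = torch.randn(1, c, 32, 32)        # stands in for 'feat'
burst_feat = torch.randn(1, c, 32, 32)  # stands in for 'burst_feat'

offsets = offset_conv(feat)
# Swapping the first argument changes which features are sampled at the
# predicted offsets, which is exactly the difference this issue asks about.
out_feat = deform(feat, offsets)
out_burst = deform(burst_feat, offsets)
```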

Color denoising metrics

Dear authors,

I got the following results on color denoising dataset:

Gain 1: 39.90
Gain 2: 37.66
Gain 4: 35.05
Gain 8: 32.49

which is totally different from the provided table with post-processing:

Gain 1: 40.58
Gain 2: 38.13
Gain 4: 35.30
Gain 8: 32.87

I used your checkpoint and cropped a 4 px border from each side (see the sketch below).

It seems like I'm doing something wrong.
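A minimal sketch of that cropping step, with placeholder tensors and an assumed [0, 1] intensity range:

```python
import torch

pred = torch.rand(1, 3, 128, 128)  # placeholder network output
gt = torch.rand(1, 3, 128, 128)    # placeholder ground truth

b = 4  # border width quoted above
pred_c, gt_c = pred[..., b:-b, b:-b], gt[..., b:-b, b:-b]
mse = torch.mean((pred_c - gt_c) ** 2)
psnr = 10 * torch.log10(1.0 / mse)  # assumes images in [0, 1]
```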

Overfitting on the training set of BurstSR

Hi, thanks for your wonderful work. I am also working on BurstSR, and your code has helped me a lot.

However, I have some problems when I try to fine-tune an SR model on the BurstSR dataset. I first train the SR model on the synthetic dataset, and everything is OK. Then I fine-tune this model on the BurstSR dataset. The training loss keeps decreasing, while the validation PSNR only grows at the beginning and gradually drops after reaching its best value. It seems that the model overfits the training set.

Have you met the same problem? I would appreciate it a lot if you could help me figure this out.

Thanks.
