
deep-burst-sr's People

Contributors

goutamgmb


deep-burst-sr's Issues

Can't download the PWC-Net

Hi @goutamgmb, I can't download the pretrained PWC-Net using install.sh and the 'gdown' command, and I also can't open the Google Drive link you provided. Could you please give some advice?
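
For reference, a minimal sketch of fetching a Google Drive file with the gdown Python package; FILE_ID is a placeholder, not the actual checkpoint ID from install.sh:

    # pip install gdown
    import gdown

    # FILE_ID is hypothetical -- substitute the ID used in the repository's install.sh / Drive link.
    url = "https://drive.google.com/uc?id=FILE_ID"
    gdown.download(url, "pwcnet-network-default.pth", quiet=False)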

File can't run

Hello, is your local.py file missing?
Can you provide a new file?

view raw images

Hi, thanks for your nice work. Could you please tell me which part of the code should be used to view the generated synthetic raw images, and how to view them?
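
As a rough illustration (not the repository's own visualisation code), a 4-channel RGGB frame can be previewed by averaging the two green channels and applying a simple display gamma, assuming the values are normalised to [0, 1] and the channel order is R, G1, G2, B:

    import numpy as np
    import matplotlib.pyplot as plt

    def preview_rggb(frame):
        """frame: (4, H, W) array with assumed channel order R, G1, G2, B in [0, 1]."""
        r, g1, g2, b = frame
        rgb = np.stack([r, (g1 + g2) / 2.0, b], axis=-1)   # half-resolution pseudo-RGB
        rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / 2.2)        # simple gamma, for display only
        plt.imshow(rgb)
        plt.axis('off')
        plt.show()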

How to avoid checkerboard artifacts?

Hi, thanks for the excellent work. When I tried to test your pretrained model on my own captured RAW bursts, I prepared my test data in the same way as your evaluation dataset: RAW bursts captured with a Samsung Galaxy S8, stored as 4-channel 'png' files of type uint16. However, I found that there are checkerboard artifacts in the output super-resolved image. How can I avoid these checkerboard artifacts?

The input sample and the output image are available at https://drive.google.com/drive/folders/15ya_CuvVZ65gjBwXqfjTd1pgqi3da8cO?usp=sharing

pwcnet fine-tune

Has the PWC-Net been fine-tuned for DBSR in "pwcnet-network-default.pth"?

Overfitting on the training dataset

Thanks for your wonderful work. I have run into some problems when trying to fine-tune DBSR on the BurstSR dataset with a burst size of 14.

I first train the model on the synthetic dataset, where it eventually reaches 40.95 dB on the synthetic validation set. I then fine-tune it on the BurstSR dataset. However, the validation score only improves at the beginning and then gradually drops after reaching 47.86 dB. It seems that the model is overfitting the BurstSR training set.

Have you encountered the same problem? Is it because the training set is too small, or because there is a domain gap between the training and validation sets? This problem has confused me for months, and I would appreciate it a lot if you could help me figure it out.

Thanks.

local.py file missing

Update environment settings
The environment setting file admin/local.py contains the paths for pre-trained networks, datasets etc. Update the paths in local.py according to your local environment.

The admin/local.py file is missing. Is this file still required?
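
If it is still required, here is a purely hypothetical sketch of what a minimal admin/local.py might look like, assuming it simply exposes path attributes as described in the README quote above; the class and attribute names are guesses, not the repository's actual API:

    # admin/local.py -- hypothetical example; adjust names to match the codebase.
    class EnvironmentSettings:
        def __init__(self):
            self.workspace_dir = '/path/to/workspace'          # where checkpoints and logs are written
            self.pretrained_nets_dir = '/path/to/pretrained'   # e.g. pwcnet-network-default.pth
            self.burstsr_dir = '/path/to/burstsr'              # BurstSR dataset root
            self.synburstval_dir = '/path/to/synburst_val'     # synthetic validation set root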

How to crop a BurstSR dataset of any size from the BurstSR dataset (full)?

Hi, @goutamgmb ! Thanks a lot for your amazing work in DBSR.

Now I want to train a burst SR model that can handle real-world image data of a larger size, such as 512x512 instead of 160x160. How can I crop patches of an arbitrary size from the BurstSR dataset (full)?

If possible, could you please share the code used to obtain the pre-processed version of the BurstSR dataset that contains roughly aligned crops from the original images? If not, could you share more details about the cropping and alignment so that I can reproduce this part of the work?

Thank you again!

Is there something specific about the test set in synthetic burst dataset?

After downloading the test dataset from the official website, there is actually 22 GB of data.
Are the 1204 images selected for the validation set chosen in a specific way, or randomly?

If there is no separate test split for validation, the following change seems to be needed:
from:

    if split in ['train', 'test']:
        self.img_pth = os.path.join(root, split, 'canon')

to:

    if split in ['train', 'test']:
        self.img_pth = os.path.join(root, 'train', 'canon')

Am I right?

NotImplementedError in PWCNet

Hi @goutamgmb, when training the model on the BurstSR dataset with the command 'python run_training.py dbsr default_realworld', I get the error below. I guess it is caused by the PWCNet module. I have checked the original PWC-Net repository and the pytorch-pwc repository, but I did not find similar problems. Could you please provide some advice?

Traceback (most recent call last):
  File "/media/data4/syj/DBSR/deep-burst-sr/trainers/base_trainer.py", line 69, in train
    self.train_epoch()
  File "/media/data4/syj/DBSR/deep-burst-sr/trainers/simple_trainer.py", line 95, in train_epoch
    self.cycle_dataset(loader)
  File "/media/data4/syj/DBSR/deep-burst-sr/trainers/simple_trainer.py", line 75, in cycle_dataset
    loss, stats = self.actor(data)
  File "/media/data4/syj/DBSR/deep-burst-sr/actors/dbsr_actors.py", line 72, in __call__
    pred, aux_dict = self.net(burst)
  File "/media/data4/syj/media/data4/syj/envs/torch_burstsr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/data4/syj/DBSR/deep-burst-sr/models/dbsr/dbsrnet.py", line 34, in forward
    out_enc = self.encoder(im)
  File "/media/data4/syj/media/data4/syj/envs/torch_burstsr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/data4/syj/DBSR/deep-burst-sr/models/dbsr/encoders.py", line 61, in forward
    offsets = self.alignment_net(x_oth.view(-1, *x_oth.shape[-3:]), x_ref.view(-1, *x_ref.shape[-3:]))
  File "/media/data4/syj/media/data4/syj/envs/torch_burstsr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/data4/syj/DBSR/deep-burst-sr/models/alignment/pwcnet.py", line 273, in forward
    flow = self.net(target_img_re, source_img_re)
  File "/media/data4/syj/media/data4/syj/envs/torch_burstsr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/data4/syj/DBSR/deep-burst-sr/models/alignment/pwcnet.py", line 225, in forward
    objEstimate = self.netSix(tenFirst[-1], tenSecond[-1], None)
  File "/media/data4/syj/media/data4/syj/envs/torch_burstsr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/media/data4/syj/DBSR/deep-burst-sr/models/alignment/pwcnet.py", line 161, in forward
    tenVolume = torch.nn.functional.leaky_relu(input=correlation.FunctionCorrelation(tenFirst=tenFirst, tenSecond=tenSecond), negative_slope=0.1, inplace=False)
  File "/media/data4/syj/DBSR/deep-burst-sr/external/pwcnet/correlation/correlation.py", line 386, in FunctionCorrelation
    return _FunctionCorrelation.apply(tenFirst, tenSecond)
  File "/media/data4/syj/DBSR/deep-burst-sr/external/pwcnet/correlation/correlation.py", line 325, in forward
    raise NotImplementedError()
NotImplementedError
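
A note on the traceback: the exception comes from external/pwcnet/correlation/correlation.py. In the upstream pytorch-pwc code, this cupy-based correlation layer only implements a CUDA path and raises NotImplementedError for tensors that are not on the GPU, so one thing worth checking (an assumption about the cause, not a confirmed diagnosis) is that cupy is installed and the training actually runs on a CUDA device:

    import torch

    # Quick environment check, assuming the correlation layer requires CUDA tensors.
    assert torch.cuda.is_available(), "PWC-Net correlation layer needs a CUDA device"

    import cupy  # the correlation kernels are compiled via cupy; an ImportError here would also explain the failure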

Quick Start Guide

This looks like a powerful set of functions. Do you have a quick-start tutorial or code example where someone can supply a set of images and see the results? I am having a hard time grasping where to start with all this great information.

BurstSR dataset path

I am not able to locate the BurstSR dataset path; the download function requires a path.

How to crop raw images in training dataset?

Hi, @goutamgmb! Thanks a lot for your wonderful work on DBSR. For the BurstSR dataset, you mention that the cropped images in the training set are generated from the burst images. I take this to mean that the original raw images are cropped into the smaller training raw images. So my question is: how do you crop the RAW images?

Again, thanks a lot for your excellent work! Looking forward to your reply :)
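
Not the authors' pipeline, but as a general note: when cropping a Bayer mosaic, the crop offsets and sizes must stay even so the RGGB phase of the 2x2 pattern is preserved. A minimal sketch:

    import numpy as np

    def crop_bayer(raw, y, x, h, w):
        """Crop an (H, W) Bayer mosaic while preserving the CFA phase."""
        # Round offsets and sizes down to even values so the crop still starts
        # on the same colour of the 2x2 RGGB pattern.
        y, x = y - y % 2, x - x % 2
        h, w = h - h % 2, w - w % 2
        return raw[y:y + h, x:x + w]

    # Example: take a 448x448 crop from a full-sized mosaic.
    full = np.zeros((3000, 4000), dtype=np.uint16)
    patch = crop_bayer(full, y=101, x=257, h=448, w=448)  # offsets get snapped to (100, 256)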

MAGMA not initialised during training

Hi,

When I train the dbsr model with the default_realworld setting, I get the following error during training:

Error in magma_getdevice_arch: MAGMA not initialized (call magma_init() first) or bad device
First, I thought this was due to the multiprocessing start method, but it is already set to spawn.

Any ideas?

Best,
Matthias

Some training data in burstsr_crops are faulty

Hi!

I use the crops from the BurstSR dataset for a task related to computational photography, and I realized that some HR images among the training samples are flat, and thus totally different from the companion LR frames. The IDs of these samples are:

  • 0004_0006
  • 0026_0002
  • 0130_0041
  • 0168_0018

For reproducing the SRx4 results it seems fine to keep these four samples (out of 5405!), but for other tasks like mine it matters a great deal to get rid of them.

Hope it may help!
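
For anyone hitting the same issue, a minimal sketch of skipping these samples when listing the pre-processed crops; it assumes the crop folders are named by the IDs listed above, which is an assumption about the on-disk layout:

    import os

    FAULTY_BURSTS = {'0004_0006', '0026_0002', '0130_0041', '0168_0018'}

    def list_valid_bursts(root):
        """Return burst folder names under `root`, excluding the known flat-HR samples."""
        return sorted(d for d in os.listdir(root)
                      if os.path.isdir(os.path.join(root, d)) and d not in FAULTY_BURSTS)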

how are the training images of Canon dataset generated?

Could you please tell me how the 448x448 training crops were generated? Did you use cv2.resize to resize the images, since some of them appear somewhat blurred?
Where in the code is this cropping implemented?

PSNR changes after CanonImage.generate_processed_image

Hi, @goutamgmb. I train the model on the BurstSR dataset, and then I use the provided compute_score.py to compute the PSNR between net_pred_warped_m and gt. The computed PSNR is around 46 dB.

    for m, m_fn in metrics_all.items():
        metric_value = m_fn(net_pred_warped_m, gt, valid=valid).cpu().item()
        scores[m].append(metric_value)

I then use the provided visualize_result.py to see the SR result, but the SR output looks somewhat blurry. So I calculate the PSNR between pred_proc_np and frame_gt_np, and it drops to around 29 dB.

    pred_proc_np = CanonImage.generate_processed_image(pred[0].cpu(), meta_info_burst,
                                                       return_np=True,
                                                       gamma=True,
                                                       smoothstep=True,
                                                       no_white_balance=False,
                                                       external_norm_factor=None)

My guess is that the CanonImage.generate_processed_image operation somehow affects the visualization. Could you please give me some advice?

Ground truth bayer data size is 640x640x3?

When I load the Canon bayer data (im_raw.png), it has size 640x640x3.
If I want to convert it to a single-channel Bayer mosaic (e.g. NxNx1), should I just sample data points from the 640x640x3 array?

The checkpoints can not be loaded

It seems that the structure of this directory was saved into the checkpoints. However, the directory has since changed: the "local" module in the admin directory has been removed. Consequently, the checkpoints cannot be loaded properly. Would you mind providing new checkpoints?
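
Until new checkpoints are available, one possible workaround (a sketch only, assuming torch.load fails with a ModuleNotFoundError for admin.local during unpickling; the EnvironmentSettings class name is a guess) is to register a stub module before loading:

    import sys
    import types
    import torch

    # Stub out the removed module so that pickle can resolve references to it.
    stub = types.ModuleType('admin.local')

    class EnvironmentSettings:  # hypothetical name of the class pickled inside the checkpoint
        pass

    stub.EnvironmentSettings = EnvironmentSettings
    sys.modules['admin.local'] = stub

    checkpoint = torch.load('/path/to/checkpoint.pth', map_location='cpu')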

BurstSR frames visualisation

Hi @goutamgmb,

Is there any possibility to add, or could you guide me on how to add, a method similar to visualize_synburst_val but for bursts from the BurstSR dataset?
I have tried to do it following that script and visualize_results, but the results were not satisfying.
The steps I followed:

  1. load SamsungRAWImage
  2. convert RGGB to RGB
  3. use generate_processed_image method

I would very much appreciate any help, as I am not an expert in photography technology.

Regards
