deblur-nerf's People

Contributors

limacv

deblur-nerf's Issues

Dataset issues

Hello dear author!
I am a beginner working on image-deblurring experiments and would like to use your dataset. I checked the dataset link on your homepage, but the real_camear_motion_blur and real_object_motion_blur datasets do not seem to have corresponding sharp counterparts. The deep-learning deblurring method I use requires sharp images as labels. Could you provide the sharp datasets corresponding to these two? Thank you so much!

Demo script

Could anyone provide a demo script? I just want to run the model for inference, without training.

Question about evaluation

Dear author, please forgive my inexperience: for evaluation, do I simply set render_only and render_test to True?

About Synthetic Defocus Blur Data

The paper reads, 'For defocus blur, we use the built-in functionality to render depth-of-field images. We fix the aperture and randomly choose a focus plane between the nearest and the furthest depth'.
Could you provide the aperture size used for each synthetic scene?
Or do all of them use the same aperture size?

Colmap preprocessing

Hi

I wonder how to generate the camera pose for blurry inputs (e.g. blurcozy2room scene).

When I run COLMAP on the 34 images in the blurcozy2room directory, an error is raised.

ERROR: the correct camera poses for current points cannot be accessed

However, when I run the same code for clean image set, it works.

I suspect COLMAP has trouble registering the blurry images correctly.

Could you share how to generate the camera poses for blurry inputs?

Thank you !

About the blender files

Hello,

Using the blender files you provided, I ran export_blur and generated 10 sharp images for each blurry viewpoint.

I then averaged the 10 images to synthesize the blurry image. Is this the correct procedure?
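For reference, a minimal sketch of that averaging step, assuming the 10 renders are aligned float images in [0, 1] (note that strictly physical blur would average in linear radiance rather than gamma-encoded values):

```python
import numpy as np

def synthesize_blur(frames):
    """Average aligned sharp sub-frame renders (each an (H, W, 3) float
    array in [0, 1]) into one motion-blurred image, mimicking a long
    exposure over the sub-frame poses."""
    return np.mean(np.stack(frames, axis=0), axis=0)

# Toy example: a bright pixel seen at two positions averages to half intensity.
f1 = np.zeros((2, 2, 3)); f1[0, 0] = 1.0
f2 = np.zeros((2, 2, 3)); f2[1, 1] = 1.0
blurry = synthesize_blur([f1, f2])
print(blurry[0, 0, 0], blurry[1, 1, 0])  # → 0.5 0.5
```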

Question about parameters in the code

Hello, there are two parameters in the code, optim_trans and optim_spatialvariant_trans. Both appear to control optimization of the ray origins. What is the difference between them, and why are there two separate parameters?

Which KPAC model is used for evaluation?

I noticed that there are several versions of KPAC.

https://github.com/HyeongseokSon1/KPAC
'''
# Our 2-level model
CUDA_VISIBLE_DEVICES=0 python main_eval_2level.py

# Our 3-level model
CUDA_VISIBLE_DEVICES=0 python main_eval_3level.py

# Our dual pixel-based model
CUDA_VISIBLE_DEVICES=0 python main_eval_dual.py
'''

Which model is used for the baseline model?

License of your dataset [IMPORTANT]

Hi @limacv,

If I want to use your dataset in my thesis, of course, I will cite your paper and dataset, but what kind of license or permission do I require from you to do it?

PSNR is not the same

Hello, thank you very much for taking the time to look at my question. In the paper, the PSNR on the BlurFactor dataset is 25.60, but the PSNR I got when training on it was 36. Why is the number in the paper lower? Is there a problem with my training?

Question about dataset

Hi, thanks for your amazing work.
I plan to build my own scenes (free camera trajectory) in Blender. I wonder whether you built these models yourself. If not, where can I find such good-quality 3D models as the outdoorpool and tanabata scenes? If so, are there any Blender tutorials suitable for a newbie like me? :)
Looking forward to your early reply.

Baselining the scenes

Hi, thanks for the great work!

I came across your project and found it very interesting and useful. I've done some baselining to make sure I understand the code correctly; here are my reproduced PSNR results alongside (the PSNR reported in the paper).

[table: reproduced PSNR vs. PSNR reported in the paper]

Everything looks good and within the range of errors other than the Factory scene. Do you have any idea why this may be?

Again, great work!
Best,

Evaluation with trained model weight

I really appreciate your awesome work!

I wonder whether you have any specific evaluation code for the trained model weights.

Could you share the code if you have it?

Problems running my own dataset

Platform

  • Ubuntu 20, NVIDIA A6000

Situation

I used my own dataset and generated poses_bounds.npy with LLFF:

'''
jon@jon-amax:~/Documents/ForVSLAM/LLFF$ python3 imgs2poses.py --scenedir /home/jon/Documents/ForVSLAM/Deblur-NeRF/data/d435i/
Post-colmap
Cameras 5
Images # 136
Points (38546, 3) Visibility (38546, 136)
Depth stats 0.06706099206246206 163.93914317128616 17.18839939417317
Done with imgs2poses
'''

Then I ran NeRF:

'''
jon@jon-amax:~/Documents/ForVSLAM/Deblur-NeRF$ python3 run_nerf.py --config configs/demo_blurball.txt --num_gpu 1
Loaded image data (720, 1280, 3, 136) [ 720. 1280. 228.8815496]
Loaded /home/jon/Documents/ForVSLAM/Deblur-NeRF/data/d435i 0.6598865999459549 129.31827083877957
recentered (3, 5)
[[ 1.0000000e+00 1.8971052e-08 1.5343165e-08 -4.0671404e-07]
[-1.8971052e-08 1.0000000e+00 -8.3013401e-09 -1.7443124e-07]
[-1.5343165e-08 8.3013401e-09 1.0000000e+00 3.6814633e-08]]
Data:
(136, 3, 5) (136, 720, 1280, 3) (136, 2)
HOLDOUT view is 109
Loaded llff (136, 720, 1280, 3) (120, 3, 5) [ 720. 1280. 228.88155] /home/jon/Documents/ForVSLAM/Deblur-NeRF/data/d435i
LLFF holdout, 5
DEFINING BOUNDS
NEAR FAR 0.0 1.0
Found ckpts []
train on image sequence of len = 136, 1280x720
get rays
shuffle rays
done
Begin
TRAIN views are [ 1 2 3 4 6 7 8 9 11 12 13 14 16 17 18 19 21 22
23 24 26 27 28 29 31 32 33 34 36 37 38 39 41 42 43 44
46 47 48 49 51 52 53 54 56 57 58 59 61 62 63 64 66 67
68 69 71 72 73 74 76 77 78 79 81 82 83 84 86 87 88 89
91 92 93 94 96 97 98 99 101 102 103 104 106 107 108 109 111 112
113 114 116 117 118 119 121 122 123 124 126 127 128 129 131 132 133 134]
TEST views are [ 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85
90 95 100 105 110 115 120 125 130 135]
VAL views are [ 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85
90 95 100 105 110 115 120 125 130 135]
! [Numerical Error] rgb_map contains nan.
! [Numerical Error] depth_map contains nan.
! [Numerical Error] acc_map contains nan.
! [Numerical Error] density_map contains nan.
! [Numerical Error] raw contains nan.
! [Numerical Error] rgb_map contains nan.
! [Numerical Error] depth_map contains nan.
! [Numerical Error] acc_map contains nan.
! [Numerical Error] density_map contains nan.
! [Numerical Error] raw contains nan.
! [Numerical Error] rgb_map contains nan.
! [Numerical Error] depth_map contains nan.
! [Numerical Error] acc_map contains nan.
'''

The error output is shown above; the cause is currently unknown.
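Not a fix, but one quick diagnostic based on the log above: the "Depth stats" line reports bounds of roughly 0.067 to 164, an extreme near/far ratio that often comes from sparse COLMAP outlier points and can produce degenerate NDC samples (and hence NaN maps). A minimal sketch of that sanity check (max_ratio is an assumed heuristic threshold; the poses_bounds.npy layout is the standard LLFF one):

```python
import numpy as np

def check_bounds(poses_bounds, max_ratio=100.0):
    """Flag LLFF near/far bounds whose ratio is extreme; poses_bounds is
    the (N, 17) array from imgs2poses.py, last two columns = near, far."""
    bounds = poses_bounds[:, -2:]
    near, far = bounds[:, 0].min(), bounds[:, 1].max()
    return far / near <= max_ratio, near, far

# Values matching the log above: depth stats of roughly 0.067 .. 164.
demo = np.zeros((1, 17)); demo[0, -2:] = (0.067, 163.9)
ok, near, far = check_bounds(demo)
print(ok)  # ratio ≈ 2446, well past the threshold → False
```

If the ratio is this extreme, filtering outlier 3D points or rescaling the scene before training usually helps.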

About 360° viewpoints

While reproducing your work, I found that it seems difficult to reconstruct scenes with 360° inputs (the data you provide contains no scenes with large viewpoint changes). Is this a limitation of the method itself or a problem with my data? If it is my data, could you provide correctly prepared data?

Groundtruth for Synthetic Data

Hi!

I came across this great work and am wondering if there is groundtruth for the synthetic data?

Best regards,
Cheng

[Question / Feature Request] Comparisons with more recent NeRFs?

Great work! I am curious whether the authors have ever compared Deblur-NeRF with more recent NeRFs (e.g., MipNeRF, MipNeRF360) where blur is handled implicitly (by rendering conical frustums instead of individual rays)? Some clarification would be appreciated.

real scene defocus dataset issue

Hello,

I have a question about the real-scene defocus dataset.

For the sausage scene, the train and test data seem inconsistent.

This leads to a remarkable performance drop on that scene in particular.

Therefore, I think the sausage scene was ultimately excluded.

In addition, the real-scene defocus dataset contains 11 scenes.

As mentioned in your paper, there should be 10 camera-motion-blur and 10 defocus-blur scenes for the real data.

If you don't mind, could you let me know which scene was excluded from the real-scene defocus dataset?

Thank you

Running directly raises an error: TypeError: PillowPlugin.read() got an unexpected keyword argument 'ignoregamma'

'''
/usr/local/lib/python3.10/dist-packages/torch/__init__.py:614: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:451.)
  _C._set_default_tensor_type(t)
Traceback (most recent call last):
  File "/content/Deblur-NeRF/run_nerf.py", line 650, in <module>
    train()
  File "/content/Deblur-NeRF/run_nerf.py", line 228, in train
    images, poses, bds, render_poses, i_test = load_llff_data(args, args.datadir, args.factor,
  File "/content/Deblur-NeRF/load_llff.py", line 242, in load_llff_data
    poses, bds, imgs = _load_data(basedir, factor=factor) # factor=8 downsamples original imgs by 8x
  File "/content/Deblur-NeRF/load_llff.py", line 102, in _load_data
    imgs = imgs = [imread(f)[..., :3] / 255. for f in imgfiles]
  File "/content/Deblur-NeRF/load_llff.py", line 102, in <listcomp>
    imgs = imgs = [imread(f)[..., :3] / 255. for f in imgfiles]
  File "/content/Deblur-NeRF/load_llff.py", line 98, in imread
    return imageio.imread(f, ignoregamma=True)
  File "/usr/local/lib/python3.10/dist-packages/imageio/__init__.py", line 97, in imread
    return imread_v2(uri, format=format, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/imageio/v2.py", line 360, in imread
    result = file.read(index=0, **kwargs)
TypeError: PillowPlugin.read() got an unexpected keyword argument 'ignoregamma'
'''

The problem likely lies in the imread call in load_llff.py. According to the error message, PillowPlugin.read() does not support the ignoregamma keyword argument.

This happens because an argument incompatible with the installed imageio version was passed, causing the PillowPlugin.read() call to fail. In that case, try removing the ignoregamma=True argument and running the code again.

Fix: in load_llff.py, remove the ignoregamma=True argument from the imread call.
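A version-tolerant alternative is to keep the legacy call and fall back when the plugin rejects the keyword. The kwarg-fallback pattern can be sketched like this (using a stub reader rather than imageio itself, since the point is only the try/except shape; in load_llff.py the reader would be imageio.imread):

```python
def _pillow_style_read(path, **kwargs):
    # Stub standing in for the new PillowPlugin.read(), which rejects
    # unknown keywords such as 'ignoregamma'.
    if kwargs:
        raise TypeError(
            "PillowPlugin.read() got an unexpected keyword argument "
            f"{next(iter(kwargs))!r}")
    return f"pixels:{path}"

def imread_compat(path, reader=_pillow_style_read):
    # Try the legacy call first (old imageio honored ignoregamma);
    # fall back to a plain read when the plugin rejects the keyword.
    try:
        return reader(path, ignoregamma=True)
    except TypeError:
        return reader(path)

print(imread_compat("img.png"))  # → pixels:img.png
```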

Could you provide weights for the real defocus scenes?

Hi teams, thank you for sharing your great work!

I want to render some real defocus scenes but unfortunately I cannot train the model with provided batch size because of small gpu memory budget.

So I simply reduced the batch size to 512, but performance degraded (both qualitatively and quantitatively).

Could you please provide the weights for all the real defocus scenes? Currently, I can only find the weights for the 'bush' scene among what you uploaded. Alternatively, could you let me know which hyperparameters, other than batch size (N_rand), to adjust to get good results with a smaller batch size?

Thanks!

PSNR SSIM not aligned with paper

Hi, for the synthetic defocus blur dataset, I used your pretrained results to calculate PSNR and SSIM, but my results are far lower than those in the paper. Do you really use the synthetic_gt folder for GT evaluation? I used synthetic_gt as GT and evaluated defocusfactory1_full_woprior_wodepth/renderonly_test_199999 on frames 000/008/016/024/032 for the factory scene.
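As a sanity check on the metric itself, here is a minimal PSNR implementation (max_val is assumed to be 1.0 for images in [0, 1]; mismatched GT frames, gamma handling, or a different max_val can each shift the number by several dB):

```python
import numpy as np

def psnr(gt, pred, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, so PSNR = 10*log10(1/0.01) = 20 dB.
gt = np.zeros((4, 4, 3))
pred = np.full((4, 4, 3), 0.1)
print(round(psnr(gt, pred), 1))  # → 20.0
```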

The function visualize_pose is missing

Dear author,

Thanks for your wonderful work! I am trying to visualize the results of the proposed method but found that the function visualize_pose is missing. Could you please update the repository with it?

Best,
Beilei

Question about the comparison experiments in the paper

Dear author, I saw the comparisons with MPR+NeRF and PVD+NeRF in your paper. Was the MPR model retrained? Thanks!

Urgent: question about parameters

Hello, I would like to ask about the code: in class DSKnet, optim_trans=False and optim_spatialvariant_trans=False are set by default. The first parameter corresponds to Section 4.3, and the second also optimizes the ray origins, as described in the paper. Why are they set to False in the code?

One blender file missing in the provided OneDrive

Dear author,

Thanks a lot for the great job! It seems that there are only four blender files in the provided OneDrive, and the blender file of the 'wine' scene is missing. Could you please provide the blender file of the 'wine' scene?
