limacv / deblur-nerf
License: MIT License
Hello dear author!
I am a beginner working on image-deblurring experiments and would like to use your dataset. I checked the dataset link on your homepage, but the real_camera_motion_blur and real_object_motion_blur datasets do not seem to have corresponding sharp datasets. The deep-learning deblurring method I use needs sharp images as labels. Could you provide the sharp datasets corresponding to these two? Thank you so much!
Hi, remarkable work here! I'm wondering whether the LPIPS results reported in the paper were computed with AlexNet?
Could anyone provide a demo script? I just want to run the model, not train it.
Dear author, please forgive my inexperience: for evaluation, do I simply set render_only and render_test to True?
Hello, why does the config file use a different render_rmnearplane value for each scene? Was it tuned per scene for the best results?
The paper reads, 'For defocus blur, we use the built-in functionality to render depth-of-field images. We fix the aperture and randomly choose a focus plane between the nearest and the furthest depth'.
Can I get the specific aperture size used for each synthetic dataset, or do they all use the same aperture size?
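The sampling described in the quoted sentence can be sketched as below (a minimal illustration, not the authors' actual Blender script; the depth range and f-stop handling are assumptions):

```python
import random

def pick_focus_distance(near_depth, far_depth, rng=random):
    """Pick a random focus plane between the nearest and furthest
    depth, as described in the paper; the aperture stays fixed."""
    return rng.uniform(near_depth, far_depth)

# In Blender this value would drive camera.data.dof.focus_distance,
# while camera.data.dof.aperture_fstop is held constant across frames.
focus = pick_focus_distance(2.0, 10.0)
```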
Hi
I wonder how to generate the camera pose for blurry inputs (e.g. blurcozy2room scene).
When I run COLMAP on the 34 images in the blurcozy2room directory, an error is raised:
ERROR: the correct camera poses for current points cannot be accessed
However, when I run the same code on the clean image set, it works.
I guess COLMAP has trouble registering the blurry images correctly.
Could you share how to generate the camera poses for blurry inputs?
Thank you !
Hello, there are two parameters in the code, optim_trans and optim_spatialvariant_trans. Both appear to optimize the ray origin. What is the difference between them, and why are there two separate parameters?
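My reading (an assumption based on the parameter names, not on the actual DSKnet code) is that optim_trans learns one origin offset shared by all rays, while optim_spatialvariant_trans allows a separate offset per ray. A toy numpy sketch of the distinction:

```python
import numpy as np

rng = np.random.default_rng(0)
ray_origins = rng.normal(size=(4, 3))  # 4 rays, xyz origins

# optim_trans: one learned translation applied to every ray origin
global_delta = np.array([0.01, 0.0, -0.02])
origins_global = ray_origins + global_delta

# optim_spatialvariant_trans: a separate learned translation per ray
# (in the real model this would be predicted by a small network)
per_ray_delta = rng.normal(scale=0.01, size=(4, 3))
origins_variant = ray_origins + per_ray_delta
```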
I noticed that there are several versions of KPAC.
https://github.com/HyeongseokSon1/KPAC
'''
# Our 2-level model
CUDA_VISIBLE_DEVICES=0 python main_eval_2level.py
# Our 3-level model
CUDA_VISIBLE_DEVICES=0 python main_eval_3level.py
# Our dual pixel-based model
CUDA_VISIBLE_DEVICES=0 python main_eval_dual.py
'''
Which model is used for the baseline model?
Hi @limacv,
If I want to use your dataset in my thesis I will of course cite your paper and the dataset, but what kind of license or permission do I need from you to do so?
Hello, thank you very much for taking time out of your busy schedule to look at my question. In the paper, the PSNR on the BlurFactor dataset is 25.60, but when I trained on the BlurFactor dataset I got a PSNR of about 36. Why is the number in the paper lower? Is there a problem with my training?
Hi, thanks for your amazing work.
I plan to build my own scenes (with a free camera trajectory) in Blender. I wonder whether you built these models yourself. If not, where can I find 3D models of such good quality as the outdoorpool and tanabata models? If so, are there any Blender tutorials suitable for a newbie like me? :)
Looking forward to your early reply.
Hi, thanks for the great work!
I came across your project and found it very interesting and useful. I've run some baselines to make sure I understand the code correctly; below are my reproduced PSNR values, with the PSNR reported in the paper in parentheses.
Everything looks good and within the expected error range except for the Factory scene. Do you have any idea why that might be?
Again, great work!
Best,
Hi @limacv,
This is amazing work.
I am using Nvidia RTX 3060 GPU (Memory=VRAM 12 GB) to train the algorithm.
After 1200 iterations, the code throws this error
I was wondering how to set the N_rand value to get the output.
Thanks
Adwait
I really appreciate your awesome work!
I wonder whether you have any evaluation code for the trained model weights?
Could you share the code if you have it?
'''
jon@jon-amax:~/Documents/ForVSLAM/LLFF$ python3 imgs2poses.py --scenedir /home/jon/Documents/ForVSLAM/Deblur-NeRF/data/d435i/
Post-colmap
Cameras 5
Images # 136
Points (38546, 3) Visibility (38546, 136)
Depth stats 0.06706099206246206 163.93914317128616 17.18839939417317
Done with imgs2poses
'''
'''
jon@jon-amax:~/Documents/ForVSLAM/Deblur-NeRF$ python3 run_nerf.py --config configs/demo_blurball.txt --num_gpu 1
Loaded image data (720, 1280, 3, 136) [ 720. 1280. 228.8815496]
Loaded /home/jon/Documents/ForVSLAM/Deblur-NeRF/data/d435i 0.6598865999459549 129.31827083877957
recentered (3, 5)
[[ 1.0000000e+00 1.8971052e-08 1.5343165e-08 -4.0671404e-07]
[-1.8971052e-08 1.0000000e+00 -8.3013401e-09 -1.7443124e-07]
[-1.5343165e-08 8.3013401e-09 1.0000000e+00 3.6814633e-08]]
Data:
(136, 3, 5) (136, 720, 1280, 3) (136, 2)
HOLDOUT view is 109
Loaded llff (136, 720, 1280, 3) (120, 3, 5) [ 720. 1280. 228.88155] /home/jon/Documents/ForVSLAM/Deblur-NeRF/data/d435i
LLFF holdout, 5
DEFINING BOUNDS
NEAR FAR 0.0 1.0
Found ckpts []
train on image sequence of len = 136, 1280x720
get rays
shuffle rays
done
Begin
TRAIN views are [ 1 2 3 4 6 7 8 9 11 12 13 14 16 17 18 19 21 22
23 24 26 27 28 29 31 32 33 34 36 37 38 39 41 42 43 44
46 47 48 49 51 52 53 54 56 57 58 59 61 62 63 64 66 67
68 69 71 72 73 74 76 77 78 79 81 82 83 84 86 87 88 89
91 92 93 94 96 97 98 99 101 102 103 104 106 107 108 109 111 112
113 114 116 117 118 119 121 122 123 124 126 127 128 129 131 132 133 134]
TEST views are [ 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85
90 95 100 105 110 115 120 125 130 135]
VAL views are [ 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85
90 95 100 105 110 115 120 125 130 135]
! [Numerical Error] rgb_map contains nan.
! [Numerical Error] depth_map contains nan.
! [Numerical Error] acc_map contains nan.
! [Numerical Error] density_map contains nan.
! [Numerical Error] raw contains nan.
! [Numerical Error] rgb_map contains nan.
! [Numerical Error] depth_map contains nan.
! [Numerical Error] acc_map contains nan.
! [Numerical Error] density_map contains nan.
! [Numerical Error] raw contains nan.
! [Numerical Error] rgb_map contains nan.
! [Numerical Error] depth_map contains nan.
! [Numerical Error] acc_map contains nan.
'''
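For what it's worth, NaNs this early in training often trace back to degenerate depth bounds (note the NEAR FAR 0.0 1.0 in the log) or bad poses. A small helper (hypothetical, not part of the repo) can localize the first offending array instead of just printing warnings:

```python
import numpy as np

def assert_finite(name, arr):
    """Raise with a descriptive message as soon as a map contains NaN/Inf."""
    arr = np.asarray(arr)
    if not np.isfinite(arr).all():
        bad = np.count_nonzero(~np.isfinite(arr))
        raise ValueError(f"{name} contains {bad} non-finite values")
    return arr

assert_finite("rgb_map", np.zeros((4, 4, 3)))  # passes silently
```

Calling this right after each render pass (rgb_map, depth_map, acc_map, ...) turns the repeated warnings into a single hard failure at the first bad tensor.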
When reproducing your work, I found that the method seems unable to reconstruct scenes with 360° inputs (the data you provide contains no scenes with large viewpoint changes). Is this a limitation of the method itself, or a problem with my data? If it is my data, could you provide suitable data?
Hi!
I came across this great work and am wondering if there is groundtruth for the synthetic data?
Best regards,
Cheng
Could you share the ground-truth mesh files or Blender files of the synthetic scene dataset?
If I use 9 kernels in a 3×3 pattern, or 25 kernels in a 5×5 pattern, will I get better results?
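For reference on the counts in the question: a k×k pattern has k² kernel points, so 3×3 gives 9 and 5×5 gives 25. A sketch of generating such offset grids (illustrative only; Deblur-NeRF's kernel points are learned, not fixed to a grid):

```python
import numpy as np

def kernel_grid(k):
    """k*k pixel-offset pattern centered at the origin."""
    r = (k - 1) / 2.0
    xs = np.linspace(-r, r, k)
    xx, yy = np.meshgrid(xs, xs)
    return np.stack([xx.ravel(), yy.ravel()], axis=-1)

print(len(kernel_grid(3)), len(kernel_grid(5)))  # 9 25
```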
I get the warning "data is not aligned! This can lead to a speedloss". How can this warning be resolved? Will it affect the reproduction results?
Will this method work if my data has both blurry and clear images?
Hi,
Is the blurwine scene the same as Trolley scene in the paper?
Thank you.
Great work! I am curious if the authors have ever compared the Deblur-NeRF with more recent NeRFs (e.g., MipNeRF, MipNeRF360) where motion blur is handled implicitly (by rendering conical frustums instead of individual rays)? Some clarifications would be preferred.
Hello,
I have a question about the real-scene defocus dataset.
For the sausage scene, I think the train and test data are inconsistent, which leads to a notable performance drop on that scene in particular. Therefore, I suspect the sausage scene was eventually excluded.
In addition, the real-scene defocus dataset contains 11 scenes, while the paper states there should be 10 camera-motion-blur and 10 defocus-blur scenes for real data.
If you don't mind, could you let me know which scene was excluded from the real-scene defocus dataset?
Thank you
'''
/usr/local/lib/python3.10/dist-packages/torch/__init__.py:614: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:451.)
  _C._set_default_tensor_type(t)
Traceback (most recent call last):
  File "/content/Deblur-NeRF/run_nerf.py", line 650, in <module>
    train()
  File "/content/Deblur-NeRF/run_nerf.py", line 228, in train
    images, poses, bds, render_poses, i_test = load_llff_data(args, args.datadir, args.factor,
  File "/content/Deblur-NeRF/load_llff.py", line 242, in load_llff_data
    poses, bds, imgs = _load_data(basedir, factor=factor)  # factor=8 downsamples original imgs by 8x
  File "/content/Deblur-NeRF/load_llff.py", line 102, in _load_data
    imgs = imgs = [imread(f)[..., :3] / 255. for f in imgfiles]
  File "/content/Deblur-NeRF/load_llff.py", line 102, in <listcomp>
    imgs = imgs = [imread(f)[..., :3] / 255. for f in imgfiles]
  File "/content/Deblur-NeRF/load_llff.py", line 98, in imread
    return imageio.imread(f, ignoregamma=True)
  File "/usr/local/lib/python3.10/dist-packages/imageio/__init__.py", line 97, in imread
    return imread_v2(uri, format=format, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/imageio/v2.py", line 360, in imread
    result = file.read(index=0, **kwargs)
TypeError: PillowPlugin.read() got an unexpected keyword argument 'ignoregamma'
'''
The problem is probably in the imread call in load_llff.py. According to the error message, PillowPlugin.read() does not support the ignoregamma keyword argument.
This happens because an argument incompatible with the installed imageio version was passed, which makes the PillowPlugin.read() call fail. In this case, you can try removing the ignoregamma=True argument and running the code again.
That is, edit the imread call in load_llff.py and remove the ignoregamma=True argument.
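A backward-compatible wrapper for load_llff.py's imread is another option (a sketch: it keeps the old behavior on imageio versions that accept ignoregamma and silently drops the keyword on newer ones):

```python
import imageio

def imread(f):
    """Read an image; older imageio accepted ignoregamma, while newer
    Pillow-plugin versions raise TypeError for that keyword."""
    try:
        return imageio.imread(f, ignoregamma=True)
    except TypeError:
        return imageio.imread(f)
```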
Hi teams, thank you for sharing your great work!
I want to render some real defocus scenes, but unfortunately I cannot train the model with the provided batch size because of a small GPU memory budget.
So I simply reduced the batch size to 512, but the performance degraded (both qualitatively and quantitatively).
Could you provide the weights for all the real defocus scenes? Currently I can only find the weights for the 'bush' scene among your uploads. Or could you let me know which hyperparameters, other than the batch size (N_rand), to adjust to get good results with a smaller batch size?
Thanks!
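One common heuristic when shrinking the ray batch (my suggestion, not something from the Deblur-NeRF configs; the reference values N_rand = 1024 and lrate = 5e-4 are assumptions) is to scale the learning rate linearly with the batch size:

```python
def scaled_lrate(base_lrate, base_batch, new_batch):
    """Linear learning-rate scaling when changing the ray batch size."""
    return base_lrate * new_batch / base_batch

# e.g. going from an assumed reference batch of 1024 rays down to 512:
print(scaled_lrate(5e-4, 1024, 512))  # 0.00025
```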
Hi, for the synthetic defocus blur dataset, I use your pretrained results to calculate PSNR and SSIM. But my results are way lower than the results in the paper. Do you really use the synthetic_gt folder for GT evaluation? I use synthetic_gt for GT and evaluate defocusfactory1_full_woprior_wodepth/renderonly_test_199999 on 000/008/016/024/032 for factory scene.
I downloaded the synthetic defocus blur dataset, but I didn't find a scene named 'Trolley'. Is it actually the scene named 'wine' in the dataset?
Dear author,
Thanks for your wonderful work first! I am trying to visualize the results of the proposed method but found that the function visualiza_pose is missing. Could you please add the function?
Best,
Beilei
Dear author, I see the comparison experiments MPR+NeRF and PVD+NeRF in your paper. I would like to know whether the MPR model was retrained. Thanks!
Thanks for this work.
Can you please provide the configs for the provided data?
Hello author, I would like to ask about the code: in class DSKnet, the defaults are optim_trans=False and optim_spatialvariant_trans=False. The first parameter corresponds to Section 4.3, and the second also optimizes the ray origin, as described in the paper. Why are they set to False in the code?
Hi @limacv,
I trained your algorithm on the LLFF synthetic dataset; please find the video below for your reference.
I have a query: what was your approach for deriving the DSK (deformable sparse kernel)?
Were there any relevant papers available on the DSK?
thanks
Adwait
Dear author,
Thanks a lot for the great job! It seems that there are only four blender files in the provided OneDrive, and the blender file of the 'wine' scene is missing. Could you please provide the blender file of the 'wine' scene?