
vlar-group / raydf


🔥RayDF in PyTorch (NeurIPS 2023)

License: Other

Python 98.27% Shell 1.73%
3d 3d-graphics 3d-reconstruction 3d-rendering 3d-representation-learning light-rays neurips neurips-2023

raydf's Issues

Encountering an issue when running "CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/blender.txt --scene lego --eval_only --grad_normal"

Hi, when I follow your tip to evaluate the ray-surface distance network with "CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/blender.txt --scene lego --eval_only --grad_normal", I encounter this error:
Traceback (most recent call last):
  File "run_mv.py", line 405, in <module>
    evaluate(args)
  File "run_mv.py", line 331, in evaluate
    depth = convert_d(vis_results['dist_abs'].squeeze().cpu().numpy(), dataloader.scene_info, out='dep')
KeyError: 'dist_abs'

Why could this happen?
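Until the root cause is found, a defensive lookup can at least report which keys run_mv.py actually returned instead of dying on the first access. This is only a sketch: it assumes vis_results is a plain dict, and the fallback key name 'dist' is purely illustrative, not a key the repository is known to use.

```python
def get_distance(vis_results, preferred="dist_abs", fallback="dist"):
    """Return the distance tensor, falling back if the preferred key is absent.

    The fallback key name is hypothetical; adjust it to whatever run_mv.py
    actually puts in vis_results under your flag combination.
    """
    if preferred in vis_results:
        return vis_results[preferred]
    if fallback in vis_results:
        return vis_results[fallback]
    # Listing the available keys makes the real mismatch visible at a glance.
    raise KeyError(
        f"neither '{preferred}' nor '{fallback}' found; "
        f"available keys: {sorted(vis_results)}"
    )
```

Printing `sorted(vis_results)` at line 331 would show whether `--grad_normal` changes which outputs the evaluation pass produces.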

Time cost of training

Hello, and respect for your great work first!
I ran CUDA_VISIBLE_DEVICES=0 python run_cls.py --config configs/blender_cls.txt --scene lego, which took nearly 3 hours on an RTX 4090 (24 GB).
Is this time consumption normal?

About Unit Sphere used in get_multiview_rays

Hello, I would like to understand the approach of the get_multiview_rays function when running run_mv.py.
Overall, it seems that the scene is confined within a large sphere (for example, the sphere radius for the ScanNet dataset is set to 9 in the config), and RayDF uses each point hit by a training ray as the center of a unit sphere. Multi-view rays are then obtained by randomly sampling on this unit sphere, and a multi-view consistency depth loss is applied at the end. Is my understanding correct?
Additionally, I have some questions about the unit-sphere size. For instance, the config for the Blender dataset sets radius=1.5. What if a training ray hits a point near the edge and a unit sphere is drawn around it? Wouldn't it exceed the boundary set by the radius? I wonder what impact this might have on training, especially when customizing a dataset: if the bounding sphere of the dataset is particularly small, would centering a unit sphere on hit points affect training? Thank you!
By the way, I drew a simple diagram to illustrate my confusion. Sorry for the bad handwriting and drawing. ( ̄﹏ ̄)
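For what it's worth, the sampling step as I understand it can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the repository's actual get_multiview_rays implementation; the function and parameter names are my own.

```python
import numpy as np

def sample_multiview_rays(hit_point, radius=1.0, n=128, rng=None):
    """Sample n ray origins uniformly on a sphere of the given radius
    centred on a surface point; each ray points back at the hit point.

    Illustrative sketch only -- not the repo's get_multiview_rays.
    """
    rng = np.random.default_rng(rng)
    # Normalised Gaussian samples are uniform on the unit sphere.
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    origins = hit_point + radius * d          # points on the sampling sphere
    dirs = hit_point - origins                # aim every ray at the hit point
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return origins, dirs
```

Under this picture, a hit point near the scene boundary would indeed place some sampled origins outside the scene's bounding sphere, which is exactly the concern raised above.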


Unit of chamfer distance?

Sorry for a paper question here (it's even a beginner question). Does the CD calculated in your paper's experiments share the same unit (centimeters) as ADE?

I appreciate your attention!
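One thing that may help here: Chamfer distance built from unsquared point-to-point distances is linear in the coordinate unit, so whatever unit the point clouds are expressed in is the unit of the CD. A minimal sketch (not necessarily the paper's exact formulation, which may use squared distances or a different reduction):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3).

    Uses unsquared Euclidean distances, so the result carries the same
    linear unit as the input coordinates (meters in -> meters out).
    """
    # Pairwise distance matrix (N, M); fine for small clouds,
    # use a KD-tree for large ones.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

With this form, rescaling both clouds from meters to centimeters multiplies the CD by exactly 100, so checking the dataset's native unit answers the question.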

How to get the visual results?

Hello!
I have finished running sh run.sh 0 lego and wandb reports success. Are there any visual results I can get, like RGB, depth, or a mesh?

Why use the dual-ray visibility classifier?

Hi, thanks for your great work.
Why use the dual-ray visibility classifier to train the ray-surface distance network (the mv stage), given that you have already calculated all the dual-ray visibility results in order to train the classifier?
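As I understand the paper, the benefit is that the classifier can be queried for arbitrary ray pairs at mv-training time, including the freshly sampled multi-view rays, instead of storing a precomputed visibility result for every possible pair. Schematically, predicted visibility gates which multi-view distances contribute to the consistency term. A hedged NumPy sketch with made-up names, not the repository's actual loss:

```python
import numpy as np

def mv_consistency_loss(pred_dist, ref_dist, visibility, eps=1e-8):
    """Visibility-weighted multi-view distance consistency (illustrative).

    pred_dist:  distances predicted for the sampled multi-view rays
    ref_dist:   distances those rays should agree with (from the query ray)
    visibility: classifier scores in [0, 1]; rays the classifier deems
                occluded get (near-)zero weight and so are ignored.
    """
    w = visibility / (visibility.sum() + eps)   # normalised weights
    return np.sum(w * np.abs(pred_dist - ref_dist))
```

With a lookup table instead of a classifier, every new multi-view ray sampled during mv training would need its visibility computed and stored in advance, which the learned classifier avoids.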

Strange result of Mesh

I got a strange mesh result using the pretrained classifier (blender-lego_d8w512ext1pw0.1_lr0.0001bs2048iters50k/0049999.tar) and run_mv --config configs/blender.txt --scene lego --eval_only --denoise. When I open the obj file tsdf_mesh_test, it shows a sphere mesh as follows:
However, the output depth images look correct to me.
Here is the detailed output:
[train] ADE=1.2083 CD=976.5395 CD_median=862.2973 PSNR=nan SSIM=nan LPIPS=nan
[test] ADE=1.1905 CD=948.3585 CD_median=829.0297 PSNR=nan SSIM=nan LPIPS=nan
[train] ADE=1.1883 CD=930.2852 CD_median=807.2603 PSNR=nan SSIM=nan LPIPS=nan
[test] ADE=1.1760 CD=915.7480 CD_median=792.2218 PSNR=nan SSIM=nan LPIPS=nan
[train] ADE=1.1926 CD=898.6947 CD_median=770.4690 PSNR=nan SSIM=nan LPIPS=nan
[test] ADE=1.1712 CD=880.8589 CD_median=770.1740 PSNR=nan SSIM=nan LPIPS=nan
[train] ADE=1.1887 CD=906.7648 CD_median=788.7728 PSNR=nan SSIM=nan LPIPS=nan
[test] ADE=1.1785 CD=900.8463 CD_median=780.1147 PSNR=nan SSIM=nan LPIPS=nan
[train] ADE=1.2278 CD=1017.9339 CD_median=888.1369 PSNR=nan SSIM=nan LPIPS=nan
[test] ADE=1.2101 CD=999.7015 CD_median=880.9478 PSNR=nan SSIM=nan LPIPS=nan

How to measure the efficiency of raydf

In Table 1 of RayDF: Neural Ray-surface Distance Fields with Multi-view Consistency, it only takes 0.019 s to render an 800×800 depth image on a 3090, but it takes me 1.4 s to render an 800×800 depth image of a chair on a V100 GPU. Did I do anything wrong?
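One common pitfall when timing GPU rendering is that CUDA kernel launches are asynchronous, so naive wall-clock timing without a synchronize can over- or under-count the work, and the first calls include warm-up costs. A hedged timing harness (names are illustrative; when timing on GPU, pass torch.cuda.synchronize as the sync callback):

```python
import time

def time_render(render_fn, n_warmup=3, n_iters=10, sync=lambda: None):
    """Average seconds per call of render_fn.

    sync should block until all pending GPU work finishes
    (e.g. torch.cuda.synchronize); the default no-op is only
    correct for CPU-side functions.
    """
    for _ in range(n_warmup):      # discard warm-up / compilation cost
        render_fn()
    sync()
    t0 = time.perf_counter()
    for _ in range(n_iters):
        render_fn()
    sync()                         # ensure queued kernels are counted
    return (time.perf_counter() - t0) / n_iters
```

Beyond measurement, the gap could also come from hardware (a V100 is considerably slower than a 3090 for this kind of workload) or from a smaller ray-batch/chunk size forcing more sequential network evaluations per image; the 0.019 s figure presumably excludes one-time setup.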
