jingsenzhu / i2-sdf
[CVPR 2023] I^2-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs
License: MIT License
Any possibility of providing a pretrained model or weights?
Hi,
Great work. I have questions regarding equation (9) in the paper.
What are the depth/normal supervisions? Specifically, what are D(r) and N(r) in equations (11) and (12), respectively?
Thanks
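For readers with the same question: the following is a hedged guess at the intended reading, not necessarily the paper's exact definitions, assuming D(r) and N(r) denote per-ray ground-truth depth and normal supervision and the hatted quantities are the volume-rendered predictions:

$$
\mathcal{L}_{\mathrm{depth}} = \sum_{r \in \mathcal{R}} \bigl|\hat{D}(r) - D(r)\bigr|,
\qquad
\mathcal{L}_{\mathrm{normal}} = \sum_{r \in \mathcal{R}} \bigl(1 - \hat{N}(r) \cdot N(r)\bigr)
$$

Please check against the paper; the exact weighting and truncation may differ.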
Hi,
Great work and thanks for sharing! I wanted to test your code after your last update; however, I can't find an example config file or documentation for it. Is this something you could provide?
Thanks for the paper!
When are you planning on releasing the code for scene editing?
Thank you for your excellent work. I'm wondering when the code for intrinsic decomposition will be released.
Hi,
Thanks for sharing this work. Does the dataset contain any camera intrinsics information, such as focal length / FOV, optical center, etc.? It seems like cameras.npz contains only the camera-to-world transformation matrix.
Thanks!
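A quick way to check what the archive actually stores (hypothetical path; adjust to wherever the scene's cameras.npz lives):

```python
import numpy as np

# List every array stored in cameras.npz together with its shape,
# to confirm whether intrinsics are present or only camera-to-world matrices.
cams = np.load("path/to/cameras.npz")  # hypothetical path
for key in cams.files:
    print(key, cams[key].shape)
```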
conf.plot.grid_boundary (code here) is not defined in any of the provided config files. Shall I set it to grid_boundary = [-1.1, 1.1]?
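For anyone blocked on this, a minimal workaround sketch (a hypothetical helper, not from the repo; the [-1.1, 1.1] default just assumes the scene is normalized to roughly the unit cube):

```python
# Hypothetical fallback: use a default grid boundary when the config omits it.
def get_grid_boundary(plot_conf: dict, default=(-1.1, 1.1)):
    """Return the marching-cubes grid boundary, falling back to `default`."""
    return list(plot_conf.get("grid_boundary", default))

plot_conf = {}  # e.g. the parsed `plot` section of a config file
print(get_grid_boundary(plot_conf))  # -> [-1.1, 1.1]
```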
Hi, when will the code be released?
I wanted to let you know about an inconsistency in the code.
The code seems to contain two normal losses:
1. get_normal_l1_loss, which actually computes an angular loss, as pointed out in Equation 12 of your paper. This name seems confusing, as it does not compute the L1 normal loss.
2. get_normal_angular_loss, which actually computes the truncated scaled angle.
However, the loss computation in forward uses loss 1 above twice (see this line), weighted by (self.normal_weight + self.angular_weight), and loss 2 above is not used at all.
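For reference, a hedged sketch (not the repository's code) of what the two losses presumably intend, and how they would normally be combined with separate weights:

```python
import torch
import torch.nn.functional as F

def normal_l1_loss(pred_n, gt_n):
    """Plain L1 distance between predicted and supervision normals."""
    return torch.abs(pred_n - gt_n).sum(dim=-1).mean()

def normal_angular_loss(pred_n, gt_n, eps=1e-6):
    """Angular loss: 1 - cosine similarity between (unit) normals."""
    cos = F.cosine_similarity(pred_n, gt_n, dim=-1, eps=eps)
    return (1.0 - cos).mean()

# Expected combination: each term gets its own weight, rather than applying
# (normal_weight + angular_weight) to a single loss used twice.
def combined_normal_loss(pred_n, gt_n, normal_weight=1.0, angular_weight=1.0):
    return (normal_weight * normal_l1_loss(pred_n, gt_n)
            + angular_weight * normal_angular_loss(pred_n, gt_n))
```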
Hello. Thank you for sharing this amazing project!
May I know when the code will be released?
I am looking forward to running this project!
Hi there, thanks for releasing the two demo scenes! I wonder if there is a plan to include materials and emitter masks in the current or a later version (as indicated here), which would be useful for inverse rendering tasks.
Thanks!
Thanks for sharing the great work!
But when I ran the mesh extraction for scan 31 and scan 127, I got a very weird result, like the following:
The main part of the scan looks great, but there are still some floaters and redundant planes. Do you have any idea about this?
The environment information is:
tinycudann 1.7
torch 1.13.1+cu117
Ubuntu 22.04.2 LTS
NVIDIA 4090
Driver Version: 525.116.04
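If the floaters are disconnected from the main surface, one possible post-processing step (a hedged sketch using trimesh, not part of the repo) is to keep only the largest connected component of the extracted mesh:

```python
import trimesh

# Load the extracted mesh (hypothetical filename), split it into connected
# components, and keep only the largest one to drop small floaters.
mesh = trimesh.load("scan31_mesh.ply")
components = mesh.split(only_watertight=False)
largest = max(components, key=lambda m: len(m.faces))
largest.export("scan31_mesh_clean.ply")
```

Note that this will not remove redundant geometry that is still attached to the main surface (e.g. a plane fused with a wall).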
Hi there, I noticed some potential issues with a few depth maps, where pixels that belong to an open window have large depth values instead of 0. Here is the 0003 image of the released bedroom scene.
Note the yellow strip in the center figure, which is inside the white area of the 'window hole'; these pixels have large depth values but should ideally be infinite (0), because they correspond to the outdoor environment. I picked two locations (red dot and green dot) on the figures and queried the depths, getting 4.3840003 and 3.1920002 respectively, but the former should be 0.
As a result, if you simply back-project the depth to 3D, you get phantom geometry (in the red circle).
I'm not sure whether this is an issue with the way I read the depth file (cv2.imread(filename, -1)) or with the depth file itself.
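For anyone reproducing this, a hedged back-projection sketch (assumed pinhole intrinsics and a hypothetical file path; the dataset's real intrinsics and depth scale may differ), treating zero depth as invalid / at infinity:

```python
import cv2
import numpy as np

depth = cv2.imread("path/to/0003_depth.png", -1).astype(np.float32)  # hypothetical path
H, W = depth.shape[:2]
fx = fy = 0.5 * W            # placeholder focal length; use the real intrinsics
cx, cy = W / 2.0, H / 2.0    # placeholder principal point

u, v = np.meshgrid(np.arange(W), np.arange(H))
valid = depth > 0            # 0 = invalid / infinite (e.g. outdoors through the window)
z = depth[valid]
x = (u[valid] - cx) / fx * z
y = (v[valid] - cy) / fy * z
points_cam = np.stack([x, y, z], axis=-1)  # (N, 3) camera-space points
```

If window pixels carry large finite depths instead of 0, they survive this mask and show up as the phantom geometry behind the window.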