Comments (18)
Hi there, a follow-up question: for real data (from [26] and [40]), are the depth maps acquired by rasterizing the provided mesh, or from off-the-shelf depth estimation models (e.g. DPT in MonoSDF)? And do those depth maps have correct absolute scale, or are they ambiguous in scale/shift, so that you would need a scale/shift-invariant depth loss?
Also, can you explain what the 1+3 real scenes from [26] and [40] are? At the bottom of the website of [40], they only list one scene of their own (Living Room2, for which I cannot find a download link) and two from Free-viewpoint (Living Room1 and Sofa)?
The real data's depths are estimated with MVS tools, so the depth maps have correct absolute scale and do not need scale shifting.
The real scenes of [26] and [40] were all calibrated by the authors of [40] in their experiments (for the living room scene from [26], the re-calibration provides more precise depth and cameras than the original version). They haven't made their dataset public yet, and I've been asking them for permission to release the data I used in this repository.
That's great news! Looking forward to the release of the real scenes with calibrated depth. Also wondering if it is possible to release the tools used to get dense MVS depth for those scenes (and third-party scenes)?
They used CapturingReality to calibrate their scenes (reported in their paper), but I am not quite familiar with this field :)
Thanks! Also, just to confirm: to get depth/normal maps on real scenes, did you rasterize with their provided mesh and poses? Scenes from Free-viewpoint do not provide depth maps or normal maps; they only provide meshes and poses.
Depth maps can be directly acquired from the MVS tools, or rasterization is also OK, I think.
I think estimating normal maps using monocular learning-based methods (as MonoSDF or NeuRIS do) is more precise than the normals from MVS; the latter contain lots of noise.
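For anyone wanting to sanity-check the rasterization route, here is a rough numpy-only sketch (my own illustration, not code from the I^2-SDF repository) that z-buffers a point cloud, e.g. densely sampled mesh vertices, into a pinhole camera. A real renderer would rasterize triangles and interpolate; this only keeps the nearest point per pixel:

```python
import numpy as np

def point_zbuffer_depth(points, K, c2w, hw):
    """Approximate a depth map by z-buffering a world-space point cloud
    (e.g. densely sampled mesh vertices) into a pinhole camera.
    K: 3x3 intrinsics, c2w: 4x4 camera-to-world pose, hw: (height, width)."""
    h, w = hw
    # world -> camera
    w2c = np.linalg.inv(c2w)
    cam = (w2c[:3, :3] @ points.T + w2c[:3, 3:4]).T
    z = cam[:, 2]
    in_front = z > 1e-6
    cam, z = cam[in_front], z[in_front]
    # project to pixel coordinates
    uv = (K @ cam.T).T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    depth = np.full((h, w), np.inf)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi  # keep the nearest surface
    depth[np.isinf(depth)] = 0.0  # 0 marks pixels with no hit
    return depth
```

With a dense enough point sampling of the mesh this gives a usable pseudo ground-truth depth; for production use, a triangle rasterizer is the better choice.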
Yeah, I agree. Just want to confirm which option was used in the real-scene experiments in I^2-SDF: did you use semi-dense MVS depth obtained by feeding images into an MVS pipeline, or depth/normals rasterized from the provided mesh?
W.r.t. monocular depth (e.g. DPT depth in MonoSDF), I don't see the current code of I^2-SDF supporting a scale/shift-invariant depth loss. I will try out DPT depth/normals with the bubble loss, but I am not sure whether things will just automatically work out if I plug in DPT depth/normals, or whether changes need to be made to the losses.
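For reference, the kind of per-image scale/shift alignment that MonoSDF applies to monocular depth can be sketched in a few lines of numpy (my own illustration under that assumption, not code from either repository):

```python
import numpy as np

def scale_shift_invariant_l2(pred, target, mask=None):
    """Least-squares scale/shift alignment before an L2 depth loss,
    in the spirit of MonoSDF's monocular depth supervision.
    Solves min_{s,t} || s*pred + t - target ||^2 in closed form,
    then returns the loss on the aligned prediction."""
    pred, target = pred.ravel(), target.ravel()
    if mask is not None:
        m = mask.ravel().astype(bool)
        pred, target = pred[m], target[m]
    # design matrix [d, 1] for the linear system in (s, t)
    A = np.stack([pred, np.ones_like(pred)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, target, rcond=None)
    aligned = s * pred + t
    return np.mean((aligned - target) ** 2), s, t
```

Plugging DPT depth into a loss that assumes absolute scale without this step would likely not work out of the box, which matches the concern above.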
I currently haven't tested I^2-SDF on monocular depths. All depth maps I used in my experiments are absolute depths. By the way, I will probably release the real data I used in the next couple of days.
For monocular depths with the bubble loss, one thing I worry about is that thin structures (e.g. chandeliers) in the monocular depth may only look visually correct rather than being scale-correct. Even with scale shifting (e.g. via least squares), the point cloud projected from the monocular depth may land in an erroneous area. But you can certainly try it first.
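One way to check where the (aligned) monocular depth actually lands is to backproject it to a world-space point cloud and inspect it. A minimal numpy sketch (my illustration, not repository code):

```python
import numpy as np

def backproject_depth(depth, K, c2w):
    """Lift a depth map to a world-space point cloud, given pinhole
    intrinsics K (3x3) and a camera-to-world pose c2w (4x4).
    Pixels with depth <= 0 are treated as invalid and skipped."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = depth > 0
    z = depth[valid]
    # homogeneous pixel coordinates of valid pixels, shape (3, N)
    uv1 = np.stack([u[valid], v[valid], np.ones_like(z)], axis=0)
    # unproject: camera-space rays scaled by depth
    cam = np.linalg.inv(K) @ uv1 * z
    # camera -> world
    world = c2w[:3, :3] @ cam + c2w[:3, 3:4]
    return world.T  # (N, 3)
```

Visualizing the resulting cloud (e.g. against MVS points) makes it easy to see whether thin structures like chandelier arms end up at plausible 3D positions or drift after the scale/shift fit.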
from i2-sdf.
Thanks for your reply!
D(r) and N(r) are the ground-truth depth and normal maps. As described in the "dataset" paragraph of Sec.7, we test our method on our synthetic dataset and some real datasets. For the synthetic dataset, ground-truth depth and normal maps are readily available from the renderer, while for the real datasets, the ground truths are predicted similarly to MonoSDF.
As for MonoSDF, it is supervised with the same source of depth and normal maps as ours to ensure fairness.
We will make our synthetic dataset publicly available soon. See our supplementary material for dataset details.
Thanks. Additional questions.
- What about geometry reconstruction performance on ScanNet? The paper only provides comparisons on synthetic data.
- Does I^2-SDF perform the same procedure as MonoSDF to solve for scale and shift when using depth from public models as supervision?
- As described in Sec.7, ScanNet suffers from inaccurate camera calibration, erroneous depth capture, and low image quality (such as motion blur), which crucially affect reconstruction quality. Instead, we evaluate our method on other real-world scenes of higher quality than ScanNet. Our synthetic dataset also aims to provide a higher-quality benchmark for indoor scene reconstruction.
- For ground-truth depth maps with correct scale, scaling and shifting are no longer required.
Thanks.
- Is there any quantitative evaluation on other real datasets? Just curious how false positives in the depth maps impact the bubble loss design.
Empirically, the bubble loss is robust to false positives to some extent, because of the smooth step in training (Sec.6). In our ablation studies, we add noise to our depth maps to simulate false positives.
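A sketch of that kind of ablation noise (a hypothetical noise model for illustration, not necessarily the paper's exact one): Gaussian jitter everywhere plus a small fraction of gross outliers standing in for spurious near hits.

```python
import numpy as np

def add_depth_noise(depth, sigma=0.01, outlier_frac=0.02, seed=0):
    """Simulate false positives in a depth map for an ablation:
    small Gaussian jitter on every pixel, plus a fraction of large
    positive offsets acting as outliers. Returns a new array."""
    rng = np.random.default_rng(seed)
    noisy = depth + rng.normal(0.0, sigma, size=depth.shape)
    n_out = int(outlier_frac * depth.size)
    idx = rng.choice(depth.size, size=n_out, replace=False)
    flat = noisy.ravel()
    flat[idx] += rng.uniform(0.2, 1.0, size=n_out)  # gross outliers
    return noisy
```

As noted in the thread, this only models false positives; false negatives (structures entirely missing from the depth map) are the harder failure mode and cannot be simulated by additive noise.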
In contrast, the bubble loss will indeed be affected by false negatives. For example, if the depth map provided by the dataset misses a chair leg, our method may struggle to reconstruct it. We leave this as future work.
Yes, that's why we want to see the impact of the depth maps on real datasets. Though the ablation study adds synthetic noise to the depth maps, it is hard to compare that with the noise introduced by off-the-shelf depth estimation models. Ground-truth depth is not available in practice.
BTW, how do you generate the reconstruction result in Fig.1? I just want to learn how to generate textured meshes.
- In real-world scenarios, with appropriate calibration techniques, the estimated depth maps are still sufficient for reconstruction. For example, Fig.4 shows a real-world scene where our method succeeds in reconstructing the lamp pole. The key to precise reconstruction is eliminating false negatives, not false positives.
- The reconstruction result in Fig.1 is not a textured mesh but a neural rendering result; I'll update the legend to resolve the misunderstanding. However, since our method also decomposes material parameters, I believe a textured mesh could be generated by attaching the predicted albedo to each vertex during the marching cubes process.
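As a sketch of that suggestion (assuming a hypothetical `albedo_fn` callable, e.g. the predicted albedo network, mapping (N, 3) points to (N, 3) RGB), one could query the albedo at each marching-cubes vertex and write an OBJ using the common per-vertex-color extension:

```python
import numpy as np

def export_colored_mesh(path, verts, faces, albedo_fn):
    """Write an OBJ with per-vertex colors: after extracting verts/faces
    (e.g. via marching cubes on the SDF), query predicted albedo at each
    vertex. `albedo_fn` is a hypothetical (N,3) points -> (N,3) RGB map.
    Uses the unofficial 'v x y z r g b' OBJ vertex-color extension."""
    colors = np.clip(albedo_fn(verts), 0.0, 1.0)
    with open(path, "w") as f:
        for (x, y, z), (r, g, b) in zip(verts, colors):
            f.write(f"v {x} {y} {z} {r} {g} {b}\n")
        for a, b_, c in faces + 1:  # OBJ face indices are 1-based
            f.write(f"f {a} {b_} {c}\n")
```

Note the per-vertex-color OBJ record is an extension supported by tools like MeshLab but not part of the original spec; baking a texture map via UV unwrapping would be the more portable route.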
Close this issue due to inactivity. Re-open it if you have further questions.
Hi, I was confused by the statement that the bubble loss breaks the stable status of the SDF field converged so far. Why can't we merge the bubble step and the smooth step into one step?