jamycheung / 360bev
Repository of 360BEV
License: Apache License 2.0
Hello,
Firstly, I'd like to thank you for sharing this interesting codebase. I have been reviewing the BEV360_segnext_s2d3d.py script and have a couple of questions I hope you could clarify for me:
Could you please explain how the masks_inliers variable is derived in the code? I'd like to understand the logic behind it.
Regarding the variable m, its derivation seems unusual. threshold_index_m is set to the maximum value of proj_index, so the boolean mask m marks every element of proj_index smaller than this maximum as True; m is False only where an element equals threshold_index_m. Could you please elaborate on the reasoning behind this approach?
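To make my question concrete, here is a minimal reproduction of how I read the logic (the tensor values are invented; the real proj_index comes from the projection step in BEV360_segnext_s2d3d.py):

```python
import torch

# Toy stand-in for the real proj_index tensor (values are made up).
proj_index = torch.tensor([[3, 7, 7], [0, 5, 2]])

# threshold_index_m is set to the maximum value of proj_index ...
threshold_index_m = proj_index.max()

# ... so m is True everywhere except where proj_index hits that maximum.
m = proj_index < threshold_index_m
print(m.tolist())  # [[True, False, False], [True, True, True]]
```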
I appreciate your time and assistance in advance. Looking forward to your response.
Best regards
Dear Authors:
When I downloaded the 360-FV dataset, I found it didn't include the label files, and the .png files contain 42 distinct values. Besides, the dataset code includes "self.df = pd.read_csv("eigen13_mapping_from_mpcat40.csv")", but I can't find that .csv anywhere. Would you mind sending it to me?
Best.
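For what it's worth, here is how I would expect such a mapping CSV to be applied once available; this is only a sketch with a toy stand-in DataFrame, and the column names "mpcat40_index" and "eigen13_index" are my assumption about the file's layout:

```python
import numpy as np
import pandas as pd

# Toy stand-in for eigen13_mapping_from_mpcat40.csv; the real file's
# column names are assumed, not confirmed.
df = pd.DataFrame({"mpcat40_index": [0, 1, 2, 5],
                   "eigen13_index": [0, 3, 3, 7]})

# Build a flat lookup table: mpcat40 id -> eigen13 id.
lut = np.zeros(df["mpcat40_index"].max() + 1, dtype=np.uint8)
lut[df["mpcat40_index"].to_numpy()] = df["eigen13_index"].to_numpy()

# Remap a label image (the dataset's .png files hold ~42 distinct ids).
label_mpcat40 = np.array([[1, 5], [2, 0]], dtype=np.uint8)
label_eigen13 = lut[label_mpcat40]
print(label_eigen13.tolist())  # [[3, 7], [3, 0]]
```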
Dear authors,
thanks for your excellent work! I have some questions about the 360FV-Matterport dataset; it would be great if you could help me out.
Thank you in advance for your help.
Best,
Junwei
I noticed that the project makes use of a checkpoint file named mit_b2.pth. Could you please provide some information about its origin? Specifically:
Where was this checkpoint file downloaded from?
Is it a pre-trained model that does not require further training to be used?
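For context, my guess (an assumption on my part, which is why I'm asking) is that mit_b2.pth is a pre-trained backbone checkpoint, in which case it would typically be loaded with strict=False so that keys from a removed head are ignored. A toy sketch of that pattern:

```python
import torch
import torch.nn as nn

# Hypothetical minimal encoder standing in for the real backbone.
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)

model = TinyEncoder()

# Simulated checkpoint: backbone weights plus one extra key, as a
# classification head would add in a real pre-trained file.
state = {"proj.weight": torch.eye(4),
         "proj.bias": torch.zeros(4),
         "head.weight": torch.zeros(1)}

# strict=False loads matching keys and reports the rest.
missing, unexpected = model.load_state_dict(state, strict=False)
print(missing, unexpected)  # [] ['head.weight']
```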
Thank you for your time and effort on this project. Looking forward to your response.
Your work is very good, but I don't know how to organize the data you provide on Google Drive into a suitable format and test it.
In the encoder section of your code, I noticed that the index doesn't appear to be utilized. Instead, you seem to recompute the corresponding panorama index based on the mask height. Given that you've already calculated smnet_training_data_maxHIndices, why isn't this index used directly instead of recalculating it?
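To illustrate what I mean, using the stored indices directly would be a plain gather along the panorama dimension; a toy sketch (the shapes are invented, and smnet_training_data_maxHIndices is the precomputed index tensor from your data):

```python
import torch

# Toy feature map and precomputed panorama indices, standing in for
# the real tensors (shapes are invented for illustration).
features = torch.arange(12.0).reshape(1, 3, 4)   # (B, C, N_pano)
max_h_indices = torch.tensor([[2, 0, 3]])        # (B, N_bev)

# Expand the indices across channels and gather along the last dim,
# instead of recomputing the panorama index from the mask height.
idx = max_h_indices.unsqueeze(1).expand(-1, features.size(1), -1)
bev_features = torch.gather(features, dim=2, index=idx)
print(bev_features.tolist())
```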
Dear authors,
Hello, I greatly admire your impactful and innovative work: 360BEV: Panoramic Semantic Mapping for Indoor Bird's-Eye View.
When I read your excellent work, I saw:
"The original Matterport3D [5] was collected via narrow-FoV cameras. As shown in Fig. 3, we convert the 18 narrow-view images and annotations into the 360° format by using rotation-translation matrices."
Could you please provide the process code of acquiring Matterport3D panoramic segmentation image?
Thanks!
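Not your pipeline, of course, but my understanding of the conversion described in the quote is that one casts a ray per equirectangular pixel and projects it through each narrow-FoV camera's rotation and intrinsics; a minimal numpy sketch of that geometry (function names and conventions are my own):

```python
import numpy as np

def pano_ray_dirs(h, w):
    """Unit ray directions for each equirectangular pixel (world frame)."""
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi   # [pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.sin(lon),
                     -np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)  # (h, w, 3)

def project_view(dirs, R, K):
    """Project world rays into one narrow-FoV camera.

    R: 3x3 world-to-camera rotation; K: 3x3 intrinsics.
    Returns pixel coordinates and a mask of rays in front of the camera.
    """
    cam = dirs @ R.T                  # rotate rays into the camera frame
    valid = cam[..., 2] > 1e-6        # keep rays in front of the camera
    uv = cam @ K.T
    uv = uv[..., :2] / np.where(valid, cam[..., 2], 1.0)[..., None]
    return uv, valid
```

Each of the 18 source images would then be sampled at uv wherever valid, which is where I imagine the per-view rotation-translation matrices come in; I would love to see how your actual conversion code handles the annotations and view boundaries.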