
360bev's People

Contributors

brucetend, elnino9ykl, jamycheung


Forkers

brucetend

360bev's Issues

Clarification on the Calculation of masks_inliers and m in BEV360_segnext_s2d3d.py

Hello,

Firstly, I'd like to thank you for sharing this interesting codebase. I have been reviewing the BEV360_segnext_s2d3d.py script and have a couple of questions I hope you could clarify for me:

Could you please explain how the masks_inliers variable is derived in the code? I'd like to understand the logic behind it.

Regarding the variable m, it seems to be derived in a rather unusual way. The threshold_index_m is set as the maximum value of proj_index. As a result, the boolean mask m will mark all elements in proj_index that are smaller than this maximum value as True. The only exception, where m would be False, is when an element in proj_index equals threshold_index_m. Could you please elaborate on the reasoning behind this approach?
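To make the question concrete, here is a small NumPy sketch of the behaviour described above (the proj_index values are invented; the variable names just follow the code in the issue):

```python
import numpy as np

# Invented example values; names follow the issue, not the repository's data.
proj_index = np.array([3, 7, 7, 2, 5, 7])

# threshold_index_m is set to the maximum of proj_index ...
threshold_index_m = proj_index.max()

# ... so m is True everywhere except where proj_index equals that maximum.
m = proj_index < threshold_index_m
# m is True for the entries 3, 2, 5 and False for every 7
```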

I appreciate your time and assistance in advance. Looking forward to your response.

Best regards

[screenshot: Snipaste_2023-09-08_14-44-32]

Some Question About Dataset

Dear Authors:
When I downloaded the 360-FV dataset, I found that it didn't include label files, and the .png files contain 42 distinct values. Besides, the dataset code includes "self.df = pd.read_csv("eigen13_mapping_from_mpcat40.csv")", but I cannot find that .csv file. Would you mind sending it to me?

Bests.

Some questions about the 360FV-Matterport dataset

Dear authors,

Thanks for your excellent work! I have some questions about the 360FV-Matterport dataset. It would be great if you could help me out.

  1. The original Matterport3D dataset has 90 scenes in total, but I can only find train 61 + val 7 + test 18 = 86 scenes in the 360BEV paper. Where are the remaining 4 scenes? Am I missing something?
  2. I cannot figure out the train/val/test split of the 360FV-Matterport dataset. Would you mind giving me some suggestions on the split?

Thank you in advance for your help.

Best,
Junwei

Query Regarding Pre-trained mit_b2.pth Checkpoint

I noticed that the project makes use of a checkpoint file named mit_b2.pth. Could you please provide some information about its origin? Specifically:

  1. Where was this checkpoint file downloaded from?
  2. Is it a pre-trained model that can be used without further training?

Thank you for your time and effort on this project. Looking forward to your response.

360BEV-Matterport Dataset

Hello,
The download from Google Drive doesn't contain this folder: 360BEV-Matterport/valid/smnet_training_data_zteng.


Is this a folder I need to create myself?
Looking forward to your reply! Thanks!

How to test?

Your work is very good, but I don't know how to organize the data you provide on Google Drive into a suitable format for testing. Could you explain the expected layout?

Index doesn't seem to be used

In the encoder section of your code, I noticed that the index doesn't appear to be utilized. Instead, you recompute the corresponding panorama index from the mask height. Given that you've already calculated smnet_training_data_maxHIndices, why isn't this index used directly instead of being recalculated?
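To illustrate what reusing the stored index could look like, here is a hedged NumPy sketch (max_h_idx stands in for smnet_training_data_maxHIndices; the shapes and values are invented, not the repository's actual data layout):

```python
import numpy as np

# Invented per-cell features and heights, just to show the gather pattern.
features = np.arange(24, dtype=float).reshape(2, 3, 4)   # (rows, candidates, C)
heights  = np.array([[0.2, 0.9, 0.5],
                     [0.7, 0.1, 0.3]])                   # candidate heights

# Precomputed once (analogous to smnet_training_data_maxHIndices) ...
max_h_idx = heights.argmax(axis=1)

# ... then gathered directly, with no recomputation from the height map.
picked = features[np.arange(features.shape[0]), max_h_idx]  # (rows, C)
```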

360FV-Matterport

Dear authors,
Hello, I greatly admire your impactful and innovative work, 360BEV: Panoramic Semantic Mapping for Indoor Bird’s-Eye View.
While reading it, I saw:
"The original Matterport3D [5] was collected via narrow-FoV cameras. As shown in Fig. 3, we convert the 18 narrow-view images and annotations into the 360° format by using rotation-translation matrices."
Could you please provide the code for producing the Matterport3D panoramic segmentation images?
Thanks!
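For context, the quoted conversion can be sketched roughly as follows: rotate a camera ray into the panorama frame, then map its direction to equirectangular coordinates. The rotation, the y-up convention, and the resolution below are invented placeholders, not the authors' actual Matterport3D calibration:

```python
import numpy as np

def ray_to_equirect(d, W=1024, H=512):
    """Map a direction d = (x, y, z) (y up, assumed convention) to an
    equirectangular pixel (u, v) on a W x H panorama."""
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, z)                 # longitude in [-pi, pi]
    lat = np.arcsin(y)                     # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * W
    v = (lat / np.pi + 0.5) * H
    return u, v

R = np.eye(3)                              # placeholder rotation (camera -> pano)
d_cam = np.array([0.0, 0.0, 1.0])          # optical axis of one narrow-FoV view
u, v = ray_to_equirect(R @ d_cam)          # identity rotation lands at the centre
```

In the actual pipeline, each narrow-view label pixel would be back-projected to a ray, rotated by that view's rotation-translation matrix, and splatted onto the panorama this way.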
