
Comments (16)

art-programmer avatar art-programmer commented on July 26, 2024

We are processing that. I will keep you updated. And thank you for the pointer!

from planercnn.

yangengt123 avatar yangengt123 commented on July 26, 2024

Hi, I was trying to run the training code, and I found that

    with open(self.dataFolder + '/scannetv2-labels.combined.tsv') as info_file:

requires a .tsv file named scannetv2-labels.combined.tsv, and

    if not os.path.exists(scenePath + '/' + scene_id + '.txt') or not os.path.exists(scenePath + '/annotation/planes.npy'):

seems to require a .txt file named after the scene_id under each scene's directory.

May I ask where I can get these files?

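For anyone hitting the same errors, the file checks quoted above can be mirrored in a quick sanity script before training. This is a minimal sketch: the layout is inferred from the snippets in this thread, the scans/ subdirectory and the helper name check_scene are assumptions, not planercnn's actual code.

```python
import os

# Sketch: verify the ScanNet layout that the training code's checks expect.
# check_scene is a hypothetical helper; the scans/ layout is an assumption.
def check_scene(data_folder, scene_id):
    """Return the list of expected files that are missing for one scene."""
    scene_path = os.path.join(data_folder, "scans", scene_id)
    expected = [
        os.path.join(data_folder, "scannetv2-labels.combined.tsv"),
        os.path.join(scene_path, scene_id + ".txt"),
        os.path.join(scene_path, "annotation", "planes.npy"),
    ]
    return [p for p in expected if not os.path.exists(p)]
```

Running this over all scene ids before training makes it obvious which downloads or preprocessing steps are still missing.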

art-programmer avatar art-programmer commented on July 26, 2024

They are from the original ScanNet dataset. You need to download the data and uncompress the .sens files (you may delete each .sens file after extraction to save space).


yangengt123 avatar yangengt123 commented on July 26, 2024

Thanks.
Just for anyone else who is not familiar with the ScanNet dataset: to get these files, use the arguments --type .sens .txt and --label_map during download.
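Concretely, the downloads above amount to three invocations of ScanNet's download script. The script name download-scannet.py and the -o flag are assumptions based on the public ScanNet release; this sketch only builds the argv lists (running them requires the script and ScanNet access approval).

```python
# Sketch: the three download invocations described above, built as argv lists.
# download-scannet.py and -o are assumed from the public ScanNet release;
# run each command with subprocess.run once you have the script.
def scannet_download_cmds(out_dir):
    base = ["python", "download-scannet.py", "-o", out_dir]
    return [
        base + ["--type", ".sens"],  # raw RGB-D streams per scene
        base + ["--type", ".txt"],   # per-scene metadata (<scene_id>.txt)
        base + ["--label_map"],      # scannetv2-labels.combined.tsv
    ]
```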

By the way, how can I run training on multiple GPUs? I cannot find any line containing torch.nn.DataParallel(model).cuda(), which I usually use for multi-GPU training. I tried to uncomment

# Uncomment to train on 8 GPUs (default is 1)

but that does not seem to work either.

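For reference, the standard DataParallel wrapping the comment refers to looks like the sketch below. This is generic PyTorch usage, not planercnn's own multi-GPU path, and it is not guaranteed to work with this repo's custom layers; the helper name to_multi_gpu is mine.

```python
import torch

# Sketch: generic PyTorch multi-GPU wrapping (to_multi_gpu is a hypothetical
# helper, not part of planercnn). Falls back gracefully on CPU-only machines.
def to_multi_gpu(model):
    if torch.cuda.device_count() > 1:
        model = torch.nn.DataParallel(model)  # replicate across visible GPUs
    return model.cuda() if torch.cuda.is_available() else model
```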

dreamPoet avatar dreamPoet commented on July 26, 2024

Sorry, but is there any way to run ScanNet's original reader.py over all the .sens files scattered across the different scene folders? It looks like I cannot move all the .sens files into one directory, since each one has to map to its corresponding annotations in the later merging step.


yangengt123 avatar yangengt123 commented on July 26, 2024

Hi @dreamPoet, I simply set --filename to path/to/Scannet_data/scans/, and it seems to work well for me.


dreamPoet avatar dreamPoet commented on July 26, 2024

Hi @yangengt123, do you simply use --filename path/to/scans/? I get the error Is a directory: '../../ScanNet/scans/'. If I use ../../ScanNet/scans/*/*.sens, it gives the error unrecognized arguments: scans/scene0000_01/scene0000_01.sens...


yangengt123 avatar yangengt123 commented on July 26, 2024

Yeah, the only difference is that I used an absolute path, something like /hdd/Scannet_data/scans/, and I did not run into any trouble parsing it. But you can always look into SensorData.py to check which path the load function is using.


dreamPoet avatar dreamPoet commented on July 26, 2024


Thank you, but have you solved the multi-GPU training problem yet? I am running into the same issue.


dreamPoet avatar dreamPoet commented on July 26, 2024

By the way, I use srun with gpu:8 and -n8 in the bash command.


dreamPoet avatar dreamPoet commented on July 26, 2024

Also, there are three parameters related to batch size: one in the config, one in the options, and one for the dataset loader... Frankly, I am a bit confused...

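A common way to tame that kind of duplication is to treat the command-line option as the single source of truth and mirror it into the config and the loader. The names below (batchSize, BATCH_SIZE, resolve_batch_size) are illustrative stand-ins, not the repo's actual parameters.

```python
import argparse

# Sketch: one source of truth for batch size. The attribute names are
# hypothetical stand-ins for the three parameters mentioned above.
def resolve_batch_size(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--batchSize", type=int, default=8)
    options = parser.parse_args(argv)
    config = {"BATCH_SIZE": options.batchSize}          # mirror into the config
    loader_kwargs = {"batch_size": options.batchSize}   # and into the DataLoader
    return options, config, loader_kwargs
```

With this pattern, changing the flag once changes all three places consistently.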

wullish avatar wullish commented on July 26, 2024

@dreamPoet did you solve the .sens extraction problem using reader.py? I am having exactly the same problem. Any ideas?

Best.


wullish avatar wullish commented on July 26, 2024

Well, I managed to solve it using Python 2.7, calling reader.py on one .sens file at a time. And don't forget reader.py's export arguments such as --export_depth_images, --export_color_images, etc.

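Putting the thread's findings together, a loop like the one below can drive reader.py once per .sens file while keeping the extracted frames beside each scene's annotations. This is a sketch: reader.py and its --export_* flags are from ScanNet's SensReader toolkit, the python2.7 interpreter name follows the comments above, and the dry_run switch is my addition for illustration.

```python
import subprocess
from pathlib import Path

# Sketch: run ScanNet's SensReader reader.py once per .sens file so the
# extracted frames land next to each scene's annotation/ directory.
# dry_run is a hypothetical switch that returns the commands without running.
def extract_all(scans_root, dry_run=False):
    cmds = []
    for sens in sorted(Path(scans_root).glob("*/*.sens")):
        out_dir = sens.parent  # keep output beside the scene's annotations
        cmd = ["python2.7", "reader.py",
               "--filename", str(sens),
               "--output_path", str(out_dir),
               "--export_depth_images",
               "--export_color_images",
               "--export_poses",
               "--export_intrinsics"]
        cmds.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return cmds
```

This sidesteps the "Is a directory" error above by always passing a single .sens file to --filename.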

yangengt123 avatar yangengt123 commented on July 26, 2024

That is right, I use Python 2.7. Maybe that is why I did not run into any problems.

By the way, may I ask whether you can still download the data from ScanNet? I tried to download it to a new server, but my download script no longer seems to work.


wullish avatar wullish commented on July 26, 2024

Ah, thanks.
Yes, I downloaded the whole dataset last week. Maybe they have changed the URLs. I think you can contact [email protected] and ask them about the script.


yangengt123 avatar yangengt123 commented on July 26, 2024

Yeah, maybe I should. Thanks.
I will close this issue now.
Please let me know if any pre-processed data becomes available.
