vzyrianov / lidargen
Official implementation of "Learning to Generate Realistic LiDAR Point Clouds" (ECCV 2022)
Home Page: https://www.zyrianov.org/lidargen/
License: MIT License
Could you please provide the configs and the pretrained model for the nuScenes dataset? I noticed that the paper reports experiments on nuScenes. Thanks for your attention.
Hello and thanks for sharing this great work! I am quite new to LiDAR technology, so please excuse my possibly naive question. After sampling, the output is 2-channel images. How can I recreate the point cloud from them?
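For reference, here is a minimal sketch of the usual back-projection from a range image to a point cloud. The assumptions are mine, not from the repo: channel 0 is range in meters (if the model outputs log-remapped range, undo that remapping first), channel 1 is intensity, rows span an HDL-64E-like vertical FOV of +3° to -25°, and columns span 360° of azimuth. Check these against the repo's dataset code before relying on it.

```python
import numpy as np

def range_image_to_points(img, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Back-project a (2, H, W) range/intensity image to an (N, 4) point cloud.

    Assumed layout (verify against the repo): channel 0 = range in meters,
    channel 1 = intensity; rows cover the vertical FOV top-down, columns
    cover 360 degrees of azimuth.
    """
    rng, intensity = img[0], img[1]
    H, W = rng.shape
    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)

    # One elevation angle per row (top row = fov_up), one azimuth per column.
    pitch = fov_up - (np.arange(H) / H) * (fov_up - fov_down)
    yaw = np.pi - (np.arange(W) / W) * 2.0 * np.pi
    pitch, yaw = np.meshgrid(pitch, yaw, indexing="ij")

    # Spherical-to-Cartesian conversion per pixel.
    x = rng * np.cos(pitch) * np.cos(yaw)
    y = rng * np.cos(pitch) * np.sin(yaw)
    z = rng * np.sin(pitch)

    pts = np.stack([x, y, z, intensity], axis=-1).reshape(-1, 4)
    return pts[rng.reshape(-1) > 0]  # drop empty pixels
```

The exact FOV bounds and azimuth origin must match whatever projection was used to build the training range images, otherwise the reconstructed cloud will be warped.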
Thanks for the code! I would like to implement your evaluation protocol for the KITTI-360 experiments. According to your paper, the test set comprises 30,758 frames from the first two sequences.
We split the dataset into two parts, where the first two sequences (30,758 frames) are the testing set, and the rest are used for training and cross-validation.
And in this repository, the dataset class parses 0000_sync and 0001_sync filenames.
(lidargen/LiDARGen/datasets/kitti.py, lines 17 to 20 at 4c3226c)
However, since KITTI-360 did not release 0001_sync, the dataset class only parses 0000_sync (11,518 frames).
I think the number 30,758 may come from the actually released serial sequences 0000_sync (11,518 frames) and 0002_sync (19,240 frames).
Which is correct for evaluation, 11,518 or 30,758 frames?
Thanks for sharing your code! However, I have some questions.
How did you manage to train the model on an A4000 (16 GB, I assume)? The batch size in "/workspace/third/lidargen/kitti.yml" is 24, but I can only fit batch=4 on a 3090 (24 GB). Did I miss something?
By the way, I trained on KITTI with batch=4 but got unsatisfactory results.
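One common workaround when the configured batch does not fit in memory (this is a generic PyTorch pattern, not something the repo necessarily does) is gradient accumulation: run several small micro-batches and step the optimizer once, so the effective batch matches the config. A minimal sketch with a hypothetical stand-in model:

```python
import torch
from torch import nn

# Stand-in model/data for illustration; the accumulation pattern is the point:
# 6 micro-batches of 4 approximate one update with an effective batch of 24.
model = nn.Linear(16, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
accum_steps = 6
w0 = model.weight.detach().clone()

optimizer.zero_grad()
for step in range(12):
    x = torch.randn(4, 16)           # micro-batch of 4
    loss = model(x).pow(2).mean()
    (loss / accum_steps).backward()  # scale so gradients average over 24 samples
    if (step + 1) % accum_steps == 0:
        optimizer.step()             # one update per 6 micro-batches
        optimizer.zero_grad()
```

Note this matches the large-batch gradient but not batch-statistics layers such as BatchNorm, which still see micro-batches of 4; that alone can change results.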
Hello, I have a question. When I run the command "python lidargen.py --sample --exp kitti_pretrained --config kitti.yml", I get an error: No such file or directory: 'kitti_pretrained/image_samples/images/samples.pth'. I know that lidargen.py executes "python LiDARGen/main.py", and afterwards there are files named samples_0.pth through samples_1160.pth, but I cannot find samples.pth in 'kitti_pretrained/image_samples/images/'. How can I solve this problem?
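A possible workaround, sketched below, is to stitch the chunked sampler outputs into the single file the next stage expects. This is a guess, not the repo's official tooling: it assumes each samples_i.pth is a tensor with samples along dim 0 (adjust if the chunks are dicts or lists), and that plain concatenation in numeric order is what samples.pth should contain.

```python
import glob
import torch

SAMPLE_DIR = "kitti_pretrained/image_samples/images"

def chunk_index(path):
    # "samples_123.pth" -> 123, so files sort numerically, not lexicographically
    return int(path.rsplit("_", 1)[1].split(".")[0])

paths = sorted(glob.glob(f"{SAMPLE_DIR}/samples_*.pth"), key=chunk_index)
if paths:
    # Load every chunk onto CPU and concatenate along the sample dimension.
    merged = torch.cat([torch.load(p, map_location="cpu") for p in paths], dim=0)
    torch.save(merged, f"{SAMPLE_DIR}/samples.pth")
```

Numeric sorting matters: lexicographic order would put samples_10.pth before samples_2.pth and scramble the sample order.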
Hello, I have two questions and would appreciate your advice.
Hi,
Thanks a lot for open-sourcing the code.
I was trying to use your code on a different dataset.
I wanted to ask: with your dataset configuration of size 2×64×1024, ~50k samples for KITTI, and ~297,737 for nuScenes,
how much time does it take to train the model for the 500,000 epochs?
I ask because with only 2,048 samples one epoch takes 10 minutes, and by that calculation it would take around ~3,000 days to train.
Am I missing something, is the training procedure different, or do you downsample the dataset?
Please let me know.
It is of huge importance to me as I am using your model as a major part of my research.
Eagerly awaiting your response.
Thank you.
Hi,
Thanks for open-sourcing the code.
I was wondering whether one could train a model from scratch on a different dataset similar to KITTI. Could you provide the training details and the files and arguments needed to train from scratch? It would be very helpful.
Thanks
Thank you for sharing your work. I have a question about the FRD evaluation.
The paper suggests that:
We choose RangeNet++, an encoder-decoder-based network for segmentation pretrained on KITTI-360. To trade off quality and preserve locality, we randomly choose 4,096 activations from the feature map of its bottleneck layer to fit the Gaussian distribution.
Meanwhile, the code seems to extract features from the penultimate layer of a model pretrained on SemanticKITTI.
The README also mentions that the resulting score differs from the paper.
Is this the recommended and proper version, or do you plan to release the KITTI-360 weights?
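For context, the FRD score itself is the standard FID-style Fréchet distance between Gaussians fitted to two feature sets; the question above is really about *which* features (bottleneck vs. penultimate layer, KITTI-360 vs. SemanticKITTI weights) feed into it. A generic sketch of the distance computation, independent of that choice:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feat_real, feat_gen):
    """Frechet distance between Gaussians fitted to two (N, D) feature sets.

    This is the generic FID-style formula; it says nothing about which
    backbone/layer the features come from, which is the open question above.
    """
    mu1, mu2 = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    s1 = np.cov(feat_real, rowvar=False)
    s2 = np.cov(feat_gen, rowvar=False)

    # Matrix square root of the covariance product; small imaginary parts
    # from numerical error are discarded.
    covmean, _ = linalg.sqrtm(s1 @ s2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```

Because both the layer choice and the pretraining dataset change the feature distribution, scores computed with different feature extractors are not comparable across papers, which is presumably why the README flags the discrepancy.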