License: BSD 3-Clause "New" or "Revised" License


polarseg's Introduction

PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation

LiDAR scan visualization of the SemanticKITTI dataset (left) and the prediction result of PolarNet (right).

Official PyTorch implementation for online LiDAR scan segmentation neural network PolarNet (CVPR 2020).

PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation
Yang Zhang*; Zixiang Zhou*; Philip David; Xiangyu Yue; Zerong Xi; Boqing Gong; Hassan Foroosh
Conference on Computer Vision and Pattern Recognition, 2020
*Equal contribution

[ArXiv paper]

News

[2021-03] Our paper on the panoptic segmentation task was accepted at CVPR 2021. Code is now available at Panoptic-PolarNet.

What is PolarNet?

PolarNet is a lightweight neural network that aims to provide near-real-time online semantic segmentation for a single LiDAR scan. Unlike existing methods that require KNN to build a graph and/or 3D/graph convolutions, we achieve fast inference by avoiding both. As shown below, we quantize points into grids using their polar coordinates. We then learn a fixed-length representation for each grid cell and feed it to a 2D neural network to produce point segmentation results.
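
For illustration, the grid assignment reduces to a Cartesian-to-polar conversion followed by a floor division into bins. The following is a minimal sketch under that reading; function names and bounds are illustrative, not the repo's exact API:

import numpy as np

def cart2polar(xyz):
    # convert Cartesian (x, y, z) points to polar (rho, phi, z)
    rho = np.sqrt(xyz[:, 0] ** 2 + xyz[:, 1] ** 2)
    phi = np.arctan2(xyz[:, 1], xyz[:, 0])
    return np.stack((rho, phi, xyz[:, 2]), axis=1)

def polar_grid_index(points_pol, grid_size, min_bound, max_bound):
    # map each point to the index of its polar grid cell
    points_pol = np.clip(points_pol, min_bound, max_bound)
    cell = (np.asarray(max_bound) - np.asarray(min_bound)) / np.asarray(grid_size)
    idx = ((points_pol - np.asarray(min_bound)) / cell).astype(np.int64)
    return np.minimum(idx, np.asarray(grid_size) - 1)  # clamp points sitting on the max bound

Each cell's points are then summarized into a fixed-length feature (e.g., a max over a learned point-wise MLP) before the 2D network runs on the resulting bird's-eye-view map.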

We achieved leading mIoU performance on the following LiDAR scan datasets: SemanticKITTI, A2D2, and Paris-Lille-3D.

Model          SemanticKITTI   A2D2     Paris-Lille-3D
SqueezeSegV2   39.7%           16.4%    36.9%
DarkNet53      49.9%           17.2%    40.0%
RangeNet++     52.2%           -        -
RandLA         53.9%           -        -
3D-MiniNet     55.8%           -        -
SqueezeSegV3   55.9%           -        -
PolarNet       57.2%           23.9%    53.3%
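
For reference, the mIoU numbers above are means over per-class IoUs computed from a confusion matrix; a minimal sketch (mirroring the small per_class_iu helper visible in the repo's test-script logs further below):

import numpy as np

def per_class_iou(hist):
    # hist: (C, C) confusion matrix, rows = ground truth, cols = prediction
    return np.diag(hist) / (hist.sum(axis=1) + hist.sum(axis=0) - np.diag(hist))

# mIoU in percent: np.nanmean(per_class_iou(hist)) * 100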

Prepare dataset and environment

This code was tested on Ubuntu 16.04 with Python 3.5, CUDA 9.2, and PyTorch 1.3.1.

1. Install the following dependencies by either pip install -r requirements.txt or manual installation.

2. Download the Velodyne point clouds and label data of the SemanticKITTI dataset here.

3. Extract everything into the same folder. The folder structure inside the zip files of the label data matches the folder structure of the LiDAR point cloud data.

4. The data file structure should look like this:

./
├── train_SemanticKITTI.py
├── ...
└── data/
    └── sequences/
        ├── 00/
        │   ├── velodyne/          # Unzip from KITTI Odometry Benchmark Velodyne point clouds.
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   └── labels/            # Unzip from SemanticKITTI label data.
        │       ├── 000000.label
        │       ├── 000001.label
        │       └── ...
        ├── ...
        └── 21/
            └── ...

Training

Run

python train_SemanticKITTI.py

to train a SemanticKITTI segmentation PolarNet from scratch after dataset preparation. The code automatically trains, validates, and early-stops the training process.

Note that we trained our model on a single TITAN Xp, which has 12 GB of GPU memory. Training the model on a GPU with less memory will likely cause an out-of-memory (OOM) error, in which case you will see an exception report. You might then want to train the model with a smaller quantization grid/feature map via python train_SemanticKITTI.py --grid_size 320 240 32.

Evaluate our pretrained model

We also provide a pretrained SemanticKITTI PolarNet weight.

python test_pretrain_SemanticKITTI.py

Results will be stored in the ./out folder. Test performance can be evaluated by uploading the label results to the SemanticKITTI competition website here.

Remember to shift the label numbers back to the original dataset format before submitting! Instructions can be found in the semantic-kitti-api repo.

python remap_semantic_labels.py -p </your result path> -s test --inverse

You should be able to reproduce the SemanticKITTI results reported in our paper.
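
For reference, the inverse remapping performed by remap_semantic_labels.py is conceptually a lookup-table gather. A minimal sketch, assuming the standard semantic-kitti.yaml config with its learning_map_inv table (paths illustrative):

import numpy as np
import yaml

with open('semantic-kitti.yaml') as f:
    cfg = yaml.safe_load(f)

# build a train-id -> original-id lookup table
inv_map = np.zeros(max(cfg['learning_map_inv']) + 1, dtype=np.uint32)
for train_id, orig_id in cfg['learning_map_inv'].items():
    inv_map[train_id] = orig_id

pred = np.fromfile('out/sequences/11/predictions/000000.label', dtype=np.uint32)
inv_map[pred].tofile('000000.label')  # labels now use the original SemanticKITTI ids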

Paris-Lille-3D and nuScenes datasets (New)

Download the Paris-Lille-3D and nuScenes datasets and put them in the data folder like this:

data/
├── NuScenes/
│   ├── trainval/
│   │   ├── lidarseg/
│   │   ├── maps/
│   │   ├── ...
│   │   └── v1.0-trainval/
│   └── test/
│       └── ...
└── paris_lille/
    ├── coarse_classes.xml
    ├── Lille1.ply
    ├── ...
    └── Paris.ply

Training and evaluation work similarly to SemanticKITTI:

python train_nuscenes.py --visibility
python train_PL.py

Citation

Please cite our paper if this code benefits your research:

@InProceedings{Zhang_2020_CVPR,
author = {Zhang, Yang and Zhou, Zixiang and David, Philip and Yue, Xiangyu and Xi, Zerong and Gong, Boqing and Foroosh, Hassan},
title = {PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

polarseg's People

Contributors: edwardzhou130, xyouz, yangzhang4065


polarseg's Issues

Error while running train_SemanticKITTI.py

Error log:

python train_SemanticKITTI.py
train_SemanticKITTI.py
Namespace(check_iter=4000, data_dir='data', grid_size=[480, 360, 32], model='polar', model_save_path='./SemKITTI_PolarSeg.pt', train_batch_size=2, val_batch_size=2)
0%| | 0/9565 [00:00<?, ?it/s]Traceback (most recent call last):
  File "train_SemanticKITTI.py", line 197, in <module>
    main(args)
  File "train_SemanticKITTI.py", line 107, in main
    for i_iter,(_,train_vox_label,train_grid,_,train_pt_fea) in enumerate(train_dataset_loader):
  File "/home/lidar/bhaskar/polarenv/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/lidar/bhaskar/polarenv/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/home/lidar/bhaskar/polarenv/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/home/lidar/bhaskar/polarenv/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/lidar/bhaskar/polarenv/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/lidar/bhaskar/polarenv/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/lidar/bhaskar/polarenv/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/lidar/bhaskar/PolarSeg/dataloader/dataset.py", line 238, in __getitem__
    processed_label = nb_process_label(np.copy(processed_label),label_voxel_pair)
TypeError: expected dtype object, got 'numpy.dtype[uint8]'

0%| | 0/9565 [00:01<?, ?it/s]

Error while generating predictions

Hi,
I was trying to test the pretrained polar model

Once the test on the validation split completes, it throws the error below while generating predictions:

Namespace(data_dir='data', grid_size=[480, 360, 32], model='polar', model_save_path='pretrained_weight/SemKITTI_PolarSeg.pt', test_batch_size=1, test_output_path='out/SemKITTI_test')


Test network performance on validation split


100%|████████████████████████████████████████████████████████████████████████████████████████████████| 101/101 [00:28<00:00, 4.72it/s]Validation per class iou:
car : 94.93%
bicycle : 17.08%
motorcycle : 0.00%
truck : 0.00%
bus : 0.00%
person : 50.76%
bicyclist : 85.54%
motorcyclist : 0.00%
road : 96.99%
parking : 27.94%
sidewalk : 82.96%
other-ground : 0.00%
building : 66.53%
fence : 19.99%
vegetation : 75.00%
trunk : 57.21%
terrain : 69.88%
pole : 44.73%
traffic-sign : 38.37%
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 101/101 [00:28<00:00, 3.53it/s]
Current val miou is 43.575
Inference time per 1 is 0.0594 seconds


Generate predictions for test split


0it [00:00, ?it/s]Traceback (most recent call last):
  File "test_pretrain.py", line 191, in <module>
    main(args)
  File "test_pretrain.py", line 170, in main
    del test_grid,test_pt_fea,test_index
UnboundLocalError: local variable 'test_grid' referenced before assignment
0it [00:00, ?it/s]

Are any changes needed to generate predictions?

Thanks in Advance

Train on a dataset and test on another

Hello, thank you very much for your great work! I have a question: when we test the pre-trained model on a new dataset, should we modify the axes of the point cloud so that they match the training dataset? For example, if I test the SemanticKITTI pre-trained PolarSeg on nuScenes, should I modify the nuScenes axes to match those of SemanticKITTI?

Speed of code

Why is your code so disgusting? Training only supports a single GPU, which is too slow. Inference takes longer than 1 second on a GTX 2080 Ti, which conflicts with the speed you state in the paper.

Build Docker Image for PolarSeg

I'm trying to build a Dockerfile to run this pre-trained network on OSX, but I encountered some problems installing torch_scatter, and the problem seems to come from the Python version. Can someone help me build the Dockerfile correctly?

ARG cuda_version=9.0
ARG cudnn_version=7
#FROM nvidia/cuda:${cuda_version}-cudnn${cudnn_version}-devel
FROM nvidia/cuda:9.0-devel-ubuntu16.04
# Pin CuDNN 
RUN apt-get update && apt-get install -y --allow-downgrades --no-install-recommends \ 
    libcudnn7=7.0.5.15-1+cuda9.0 \
    libcudnn7-dev=7.0.5.15-1+cuda9.0
RUN apt-mark hold libcudnn7 libcudnn7-dev

# Suppress warnings about missing front-end. As recommended at:
# http://stackoverflow.com/questions/22466255/is-it-possibe-to-answer-dialog-questions-when-installing-under-docker
ARG DEBIAN_FRONTEND=noninteractive

# Install system packages
RUN apt-get update && apt-get install -y --no-install-recommends \
      bzip2 \
      g++ \
      git \
      graphviz \
      libgl1-mesa-glx \
      libhdf5-dev \
      openmpi-bin \
      wget \
      curl \
      python3 \
      python3-pip \
      python3-venv \
      python3-dev \
      python3-setuptools \
      python3-tk



RUN apt-get update && apt-get install -y --no-install-recommends \
    apt-utils git curl vim unzip openssh-client wget \
    build-essential cmake \
    libopenblas-dev

RUN apt-get install -y --no-install-recommends \
    libjpeg8-dev libtiff5-dev libjasper-dev libpng12-dev \
    libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libgtk2.0-dev \
    liblapacke-dev checkinstall

# Install Python packages and keras
ENV NB_USER app
ENV NB_UID 1000

RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
    mkdir -p /src && \
    mkdir -p /workspace && \
    chown $NB_USER /src && \
    chown $NB_USER /workspace

RUN rm -rf /var/lib/apt/lists/*

USER $NB_USER

WORKDIR /workspace

RUN pip3 install --upgrade "pip==20.3"

RUN pip3 install torch==1.5.0

RUN pip3 install \
    numpy \
    tqdm \
    pyyaml \
    Cython \
    scipy \
    dropblock \ 
    plyfile \
    llvmlite==0.31.0 \
    numba>=0.39.0 
    
    
#RUN pip3 install torch_scatter>=1.3.0

ENV PATH "/home/app/.local/bin:$PATH"
CMD jupyter notebook --port=8888 --ip=0.0.0.0

ERROR:

> [13/13] RUN pip3 install torch_scatter>=1.3.0:                                                                                                                                    
#16 0.955 WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.                                                                         
#16 0.955 Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.                                                                              
#16 0.955 To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.                                                                                 
#16 0.955 DEPRECATION: Python 3.5 reached the end of its life on September 13th, 2020. Please upgrade your Python as Python 3.5 is no longer maintained. pip 21.0 will drop support for Python 3.5 in January 2021. pip 21.0 will remove support for this functionality.
#16 1.552     ERROR: Command errored out with exit status 1:
#16 1.552      command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-qebgirgt/torch-scatter_f7452704942b4c9589c7aac19760b7c1/setup.py'"'"'; __file__='"'"'/tmp/pip-install-qebgirgt/torch-scatter_f7452704942b4c9589c7aac19760b7c1/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-b3adgb6a
#16 1.552          cwd: /tmp/pip-install-qebgirgt/torch-scatter_f7452704942b4c9589c7aac19760b7c1/
#16 1.552     Complete output (6 lines):
#16 1.552     Traceback (most recent call last):
#16 1.552       File "<string>", line 1, in <module>
#16 1.552       File "/tmp/pip-install-qebgirgt/torch-scatter_f7452704942b4c9589c7aac19760b7c1/setup.py", line 47
#16 1.552         sources = [main, osp.join(extensions_dir, 'cpu', f'{name}_cpu.cpp')]
#16 1.552                                                                          ^
#16 1.552     SyntaxError: invalid syntax
#16 1.552     ----------------------------------------
#16 1.553 ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
#16 1.573 WARNING: You are using pip version 20.3; however, version 20.3.4 is available.
#16 1.573 You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
------
executor failed running [/bin/sh -c pip3 install torch_scatter>=1.3.0]: exit code: 1

Paris-Lille-3D

Hey,
thank you for the good work and for making this available to the community.
You mention in your paper that you are splitting the data set (Paris-Lille-3D). Is there also code available for this?
Thanks.

Inputs into PolarNet

PolarNet seems to take in a 9-dimensional feature vector as an input; could you clarify what these 9 features are?

Based on spherical_dataset in ./dataloader/dataset.py, my guesses are --

  • return_xyz: (3 features) [**not too sure what this is**]
  • xyz_pol: (3 features) original x, y and z coordinates converted to radius, angle and z, size [num_points, grid_size_r, grid_size_theta, grid_size_z]
  • xy: (2 features) original x and y coordinates, size [num_points, 3]
  • sig: (1 feature) reflectance, size [num_points, 1]

Are my guesses correct / wrong? Thanks!

running error when executing train.py

Hi, @edwardzhou130 ,

Thanks for releasing the package. I followed requirements.txt to install the required packages. However, when I run python train.py, I get the following error:

(pytorch1.3) root@milton-ThinkCentre-M93p:/data/code10/PolarSeg# python train.py 
/data/code10/PolarSeg/network/ptBEV.py:161: NumbaPerformanceWarning: np.dot() is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, A))
  pairwise_distance = sum_vec + np.transpose(sum_vec) - 2*np.dot(xyz, np.transpose(xyz))
Traceback (most recent call last):
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/targets/linalg.py", line 60, in ensure_blas
    import scipy.linalg.cython_blas
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/scipy/__init__.py", line 156, in <module>
    from . import fft
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/scipy/fft/__init__.py", line 81, in <module>
    from ._helper import next_fast_len
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/scipy/fft/_helper.py", line 4, in <module>
    from . import _pocketfft
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/scipy/fft/_pocketfft/__init__.py", line 3, in <module>
    from .basic import *
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/scipy/fft/_pocketfft/basic.py", line 8, in <module>
    from . import pypocketfft as pfft
ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/scipy/fft/_pocketfft/pypocketfft.cpython-36m-x86_64-linux-gnu.so)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/errors.py", line 717, in new_error_context
    yield
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 288, in lower_block
    self.lower_inst(inst)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 360, in lower_inst
    val = self.lower_assign(ty, inst)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 534, in lower_assign
    return self.lower_expr(ty, value)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 999, in lower_expr
    res = self.lower_call(resty, expr)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 791, in lower_call
    res = self._lower_call_normal(fnty, expr, signature)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 970, in _lower_call_normal
    res = impl(self.builder, argvals, self.loc)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/targets/base.py", line 1146, in __call__
    res = self._imp(self._context, builder, self._sig, args, loc=loc)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/targets/base.py", line 1176, in wrapper
    return fn(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/targets/linalg.py", line 528, in dot_2
    ensure_blas()
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/targets/linalg.py", line 62, in ensure_blas
    raise ImportError("scipy 0.16+ is required for linear algebra")
ImportError: scipy 0.16+ is required for linear algebra

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 13, in <module>
    from network.ptBEV import ptBEVnet
  File "/data/code10/PolarSeg/network/ptBEV.py", line 153, in <module>
    @nb.jit('b1[:](f4[:,:],i4)',nopython=True,cache=True)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/decorators.py", line 200, in wrapper
    disp.compile(sig)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler_lock.py", line 32, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/dispatcher.py", line 768, in compile
    cres = self._compiler.compile(args, return_type)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/dispatcher.py", line 77, in compile
    status, retval = self._compile_cached(args, return_type)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/dispatcher.py", line 91, in _compile_cached
    retval = self._compile_core(args, return_type)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/dispatcher.py", line 109, in _compile_core
    pipeline_class=self.pipeline_class)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler.py", line 551, in compile_extra
    return pipeline.compile_extra(func)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler.py", line 331, in compile_extra
    return self._compile_bytecode()
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler.py", line 393, in _compile_bytecode
    return self._compile_core()
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler.py", line 373, in _compile_core
    raise e
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler.py", line 364, in _compile_core
    pm.run(self.state)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler_machinery.py", line 347, in run
    raise patched_exception
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler_machinery.py", line 338, in run
    self._runPass(idx, pass_inst, state)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler_lock.py", line 32, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler_machinery.py", line 302, in _runPass
    mutated |= check(pss.run_pass, internal_state)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/compiler_machinery.py", line 275, in check
    mangled = func(compiler_state)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/typed_passes.py", line 407, in run_pass
    NativeLowering().run_pass(state)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/typed_passes.py", line 349, in run_pass
    lower.lower()
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 195, in lower
    self.lower_normal_function(self.fndesc)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 248, in lower_normal_function
    entry_block_tail = self.lower_function_body()
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 273, in lower_function_body
    self.lower_block(block)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/lowering.py", line 288, in lower_block
    self.lower_inst(inst)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/errors.py", line 725, in new_error_context
    six.reraise(type(newerr), newerr, tb)
  File "/root/anaconda3/envs/pytorch1.3/lib/python3.6/site-packages/numba/six.py", line 669, in reraise
    raise value
numba.errors.LoweringError: Failed in nopython mode pipeline (step: nopython mode backend)
scipy 0.16+ is required for linear algebra

File "network/ptBEV.py", line 161:
def nb_greedy_FPS(xyz,K):
    <source elided>
        sum_vec[j,0] = np.sum(xyz_sq[j,:])
    pairwise_distance = sum_vec + np.transpose(sum_vec) - 2*np.dot(xyz, np.transpose(xyz))
    ^

[1] During: lowering "$92.15 = call $92.9(xyz, $92.14, func=$92.9, args=[Var(xyz, ptBEV.py:155), Var($92.14, ptBEV.py:161)], kws=(), vararg=None)" at /data/code10/PolarSeg/network/ptBEV.py (161)

It seems that my scipy conflicts with numba. Could you give any hints on solving this? Also, please give the corresponding versions for the packages mentioned in requirements.txt.

My installed packages are:
torch-scatter: 2.0.4
scipy: 1.4.1
numba: 0.47.0

THX!

Inquiry about the meaning of "label - 1", thanks!!

Dear author,
I am confused about the "label - 1" operation in the function SemKITTI2train_single. What does this function do?

def SemKITTI2train(label):
    if isinstance(label, list):
        return [SemKITTI2train_single(a) for a in label]
    else:
        return SemKITTI2train_single(label)

def SemKITTI2train_single(label):
    return label - 1  # uint8 trick
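
For reference, the "uint8 trick" comment points at unsigned wraparound: subtracting 1 shifts the class ids down by one and maps class 0 (unlabeled) to 255, which can then serve as an ignore index. A quick demonstration:

import numpy as np

label = np.array([0, 1, 10], dtype=np.uint8)
print(label - 1)  # [255   0   9] -> class 0 wraps around to 255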

ONNX conversion: input tensor size

Hi,
when doing ONNX conversion for PyTorch models, we need to create a dummy input variable, like (3, 224, 224) for ResNet50.
If we need to do ONNX conversion for PolarSeg, how shall we create this dummy input variable? What is the size of the input tensor?

Thanks

std::bad_alloc / segmentation fault (core dumped)

I tried to run train_SemanticKITTI.py and test_pretrain_SemanticKITTI.py.
How can I fix this error?

terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)

or

Segmentation fault (core dumped)

nuScenes Pretrained Model Size Mismatch

Hi, authors! When I tried to directly run test_pretrain_nuscenes.py, it told me that the loaded model didn't match the defined model.
Here are the command line logs:

test_pretrain_nuscenes.py
Namespace(data_dir='data', grid_size=[480, 360, 32], model='polar', model_save_path='pretrained_weight/Nuscenes_PolarSeg.pt', test_batch_size=1, test_output_path='out/Nuscenes/lidarseg/test', visibilty=False)
Traceback (most recent call last):
  File "test_pretrain_nuscenes.py", line 196, in <module>
    main(args)
  File "test_pretrain_nuscenes.py", line 75, in main
    my_model.load_state_dict(torch.load(model_save_path))
  File "/home/**/anaconda3/envs/polarseg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 847, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ptBEVnet:
size mismatch for BEV_model.network.inc.conv.0.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for BEV_model.network.inc.conv.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for BEV_model.network.inc.conv.0.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for BEV_model.network.inc.conv.0.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for BEV_model.network.inc.conv.1.conv1.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]).

ValueError: Input must be 1- or 2-d.

Hi!
I just ran python test_pretrain_SemanticKITTI.py with the KITTI dataset, but it fails. I put two sequence folders, 01 and 04, under the data folder.
What should I do?
Thanks!

(pytorch) ziudarkin@ziudarkin-ubuntu:~/PolarSeg$ python test_pretrain_SemanticKITTI.py 
test_pretrain_SemanticKITTI.py
Namespace(data_dir='data', grid_size=[480, 360, 32], model='polar', model_save_path='pretrained_weight/SemKITTI_PolarSeg.pt', test_batch_size=1, test_output_path='out/SemKITTI_test')
********************************************************************************
Test network performance on validation split
********************************************************************************
0it [00:00, ?it/s]Traceback (most recent call last):
  File "test_pretrain_SemanticKITTI.py", line 185, in <module>
    main(args)
  File "test_pretrain_SemanticKITTI.py", line 122, in main
    iou = per_class_iu(sum(hist_list))
  File "test_pretrain_SemanticKITTI.py", line 26, in per_class_iu
    return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
  File "<__array_function__ internals>", line 6, in diag
  File "/home/ziudarkin/anaconda3/envs/pytorch/lib/python3.7/site-packages/numpy/lib/twodim_base.py", line 303, in diag
    raise ValueError("Input must be 1- or 2-d.")
ValueError: Input must be 1- or 2-d.
0it [00:00, ?it/s]

Early stopping

Does train.py do early stopping as mentioned in README.md? I can't seem to find the relevant section of code in train.py where early stopping has been implemented.

train epoch

I want to know how many epochs you trained on the KITTI dataset.
I have trained 15 epochs, and the best val mIoU is 47.082.

Custom Training and Inference

@YangZhang4065 @edwardzhou130 Thanks for open-sourcing the work, it's great work. I have a few queries:
Q1. You tested on the SemanticKITTI dataset; how will the output differ when we reduce the point cloud density of the scene by 80%-90%?
Q2. On which system configuration did you test the inference?
Q3. Can we train PolarSeg for a smaller number of classes, like 8? If so, what modifications do I have to make to the code?
Thanks in advance

Epoch issue

Hello.

I am running PolarNet.
May I ask how many epochs the pretrained model was trained for?

Thank you.

The problem of visualization on NuScenes dataset

Hi, authors:
Thanks a lot for your research. I tried test_pretrain_nuscenes.py and wanted to visualize the result. However, when I used the nuScenes API to show the result, the visualization failed. My program and the errors are shown below:

%matplotlib inline

from nuscenes import NuScenes
import os

nusc = NuScenes(version='v1.0-test', dataroot='/home/hank/PolarSeg/data/NuScenes/test', verbose=True)
my_sample = nusc.sample[0]
sample_data_token = my_sample['data']['LIDAR_TOP']
my_predictions_bin_file = os.path.join('/home/hank/PolarSeg/out/Nuscenes/lidarseg/test', sample_data_token + '_lidarseg.bin')
print(my_predictions_bin_file)
nusc.render_sample_data(sample_data_token,
                        show_lidarseg=True,
                        with_anns=False,
                        show_lidarseg_legend=True,
                        lidarseg_preds_bin_path=my_predictions_bin_file)

ValueError                                Traceback (most recent call last)
<ipython-input-23-f9f2feb3df21> in <module>
      7                         with_anns=False,
      8                         show_lidarseg_legend=True,
----> 9                         lidarseg_preds_bin_path=my_predictions_bin_file)
     10 
     11 # my_sample = nusc.sample[11]

~/anaconda3/lib/python3.7/site-packages/nuscenes/nuscenes.py in render_sample_data(self, sample_data_token, with_anns, box_vis_level, axes_limit, ax, nsweeps, out_path, underlay_map, use_flat_vehicle_coordinates, show_lidarseg, show_lidarseg_legend, filter_lidarseg_labels, lidarseg_preds_bin_path, verbose)
    534                                          show_lidarseg_legend=show_lidarseg_legend,
    535                                          filter_lidarseg_labels=filter_lidarseg_labels,
--> 536                                          lidarseg_preds_bin_path=lidarseg_preds_bin_path, verbose=verbose)
    537 
    538     def render_annotation(self, sample_annotation_token: str, margin: float = 10, view: np.ndarray = np.eye(4),

~/anaconda3/lib/python3.7/site-packages/nuscenes/nuscenes.py in render_sample_data(self, sample_data_token, with_anns, box_vis_level, axes_limit, ax, nsweeps, out_path, underlay_map, use_flat_vehicle_coordinates, show_lidarseg, show_lidarseg_legend, filter_lidarseg_labels, lidarseg_preds_bin_path, verbose)
   1248             point_scale = 0.2 if sensor_modality == 'lidar' else 3.0
   1249 
-> 1250             scatter = ax.scatter(points[0, :], points[1, :], c=colors, s=point_scale)
   1251 
   1252             # Show velocities.

~/anaconda3/lib/python3.7/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
   1436     def inner(ax, *args, data=None, **kwargs):
   1437         if data is None:
-> 1438             return func(ax, *map(sanitize_sequence, args), **kwargs)
   1439 
   1440         bound = new_sig.bind(ax, *args, **kwargs)

~/anaconda3/lib/python3.7/site-packages/matplotlib/cbook/deprecation.py in wrapper(*inner_args, **inner_kwargs)
    409                          else deprecation_addendum,
    410                 **kwargs)
--> 411         return func(*inner_args, **inner_kwargs)
    412 
    413     return wrapper

~/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py in scatter(self, x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, verts, edgecolors, plotnonfinite, **kwargs)
   4451             self._parse_scatter_color_args(
   4452                 c, edgecolors, kwargs, x.size,
-> 4453                 get_next_color_func=self._get_patches_for_fill.get_next_color)
   4454 
   4455         if plotnonfinite and colors is None:

~/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py in _parse_scatter_color_args(c, edgecolors, kwargs, xsize, get_next_color_func)
   4305                     # NB: remember that a single color is also acceptable.
   4306                     # Besides *colors* will be an empty array if c == 'none'.
-> 4307                     raise invalid_shape_exception(len(colors), xsize)
   4308         else:
   4309             colors = None  # use cmap, norm after collection is created

ValueError: 'c' argument has 138880 elements, which is inconsistent with 'x' and 'y' with size 34720.

Try to visualize the lidar segmentation of trainval data

Besides, I can show the "v1.0-trainval" sample data successfully. The program and the result are shown below:

%matplotlib inline

from nuscenes import NuScenes
import os

nusc = NuScenes(version='v1.0-trainval', dataroot='/home/hank/PolarSeg/data/NuScenes/trainval', verbose=True)
my_sample = nusc.sample[0]
sample_data_token = my_sample['data']['LIDAR_TOP']
my_predictions_bin_file = os.path.join('/home/hank/PolarSeg/data/NuScenes/trainval/lidarseg/v1.0-trainval', sample_data_token + '_lidarseg.bin')
print(my_predictions_bin_file)
nusc.render_sample_data(sample_data_token,
                        show_lidarseg=True,
                        with_anns=False,
                        show_lidarseg_legend=True,
                        lidarseg_preds_bin_path=my_predictions_bin_file)

GPU Memory Occupation

I am currently adapting your code to my own project, but I found the GPU memory occupation exceedingly large. I tested the memory usage of the Lovász softmax, which showed it uses nearly 2000 MB with batch size 1. I don't think that is normal. Have you checked the reasons behind the oddly large GPU memory usage? Could you point out which part of the network occupies the most memory besides the Lovász softmax?

Visualizing result

Hi @edwardzhou130 @YangZhang4065

I have generated the predictions from pretrained model and used the https://github.com/PRBonn/semantic-kitti-api to visualize the result
The IoU is as follows:
class iou:
car : 90.11%
bicycle : 37.91%
motorcycle : 0.14%
truck : 0.00%
bus : 30.64%
person : 48.71%
bicyclist : 82.20%
motorcyclist : 0.00%
road : 96.21%
parking : 34.88%
sidewalk : 85.71%
other-ground : 0.12%
building : 92.32%
fence : 26.77%
vegetation : 81.96%
trunk : 69.04%
terrain : 76.29%
pole : 64.89%
traffic-sign : 28.24%
The visualized result for frame 000001 is attached.

  1. I observed that the label colors differ in the image, and vegetation is either not detected or given a different label. Is there another method to visualize the generated predictions?
  2. What changes are needed to train for a smaller number of classes?

Thanks for your effort and replies.

Invalid shape during training (train_SemanticKITTI.py)

Hello, I received an invalid shape error during training. Any suggestions? (#2 (comment))

Traceback (most recent call last):
  File "train_SemanticKITTI.py", line 230, in <module>
    main(args)
  File "train_SemanticKITTI.py", line 139, in main
    predict_labels = my_model(val_pt_fea_ten, val_grid_ten)
  File "/home/stud/j/johanb17/.conda/envs/Polarseg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/stud2/j/johanb17/PolarSeg-master/network/ptBEV.py", line 141, in forward
    net_return_data = self.BEV_model(out_data)
  File "/home/stud/j/johanb17/.conda/envs/Polarseg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/stud2/j/johanb17/PolarSeg-master/network/BEV_Unet.py", line 27, in forward
    x = x.view(new_shape)
RuntimeError: shape '[2, 480, 360, 32, 1]' is invalid for input of size 345600

Very small mIoU

I trained this net but got a very small mIoU:

Validation per class iou:
car : 4.74%
bicycle : 0.03%
motorcycle : 0.03%
truck : 0.14%
bus : 0.55%
person : 0.03%
bicyclist : 0.05%
motorcyclist : 0.00%
road : 0.44%
parking : 0.85%
sidewalk : 12.27%
other-ground : 0.08%
building : 3.78%
fence : 1.65%
vegetation : 1.42%
trunk : 1.62%
terrain : 4.21%
pole : 0.37%
traffic-sign : 0.17%
Current val miou is 1.707 while the best val miou is 1.707
Current val loss is 3.895
epoch 6 iter 2610, loss: nan

Inference Time

@YangZhang4065 @edwardzhou130 Hi, I calculated the inference time for sequence 8 on a 1080 Ti; running the session alone takes around 60 ms. How did you calculate 23 FPS? Is that the latency or only the model inference time?
Thanks in advance

Train parameters, lr, and loss

Dear author, I have run the model. Could I know some details about the loss, batch size, and initial learning rate?

Many thanks!

torch_scatter.scatter_max()

Hello, thank you very much for your open source code.

Now I'm trying to convert your code using LibTorch, but I ran into a tricky problem.

The torch_scatter.scatter_max() function cannot be used directly by LibTorch because it uses a dynamic library. I tried to search for the specific meaning of this function, but unfortunately I couldn't find it.

Could you please explain what this function does when you have time?

Thank you!!!
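
For reference, torch_scatter.scatter_max performs a segmented max reduction: entries of src that share the same index are reduced to their maximum (PolarNet uses it to max-pool point features into grid cells). A minimal sketch:

import torch
from torch_scatter import scatter_max

src = torch.tensor([[1., 5., 2., 4.]])
index = torch.tensor([[0, 0, 1, 1]])   # segment id for each entry of src
out, argmax = scatter_max(src, index, dim=1)
print(out)     # tensor([[5., 4.]]) -> max of each segment
print(argmax)  # positions in src that produced each max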

RuntimeError: max_pool2d_with_indices_out_cuda_frame failed

Thanks first for the excellent work!

I have run the polar network and it works well. But when I run the traditional network for comparison, a RuntimeError occurs:

  File "train.py", line 195, in <module>
    main(args)
  File "train.py", line 164, in main
    outputs = my_model(train_pt_fea_ten,train_grid_ten)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shuangjie/Projects/dev/Rand-LA/ref/PolarSeg/network/ptBEV.py", line 139, in forward
    net_return_data = self.BEV_model(out_data)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shuangjie/Projects/dev/Rand-LA/ref/PolarSeg/network/BEV_Unet.py", line 19, in forward
    x = self.network(x)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shuangjie/Projects/dev/Rand-LA/ref/PolarSeg/network/BEV_Unet.py", line 46, in forward
    x4 = self.down3(x3)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shuangjie/Projects/dev/Rand-LA/ref/PolarSeg/network/BEV_Unet.py", line 159, in forward
    x = self.mpconv(x)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/modules/pooling.py", line 141, in forward
    self.return_indices)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/_jit_internal.py", line 181, in fn
    return if_false(*args, **kwargs)
  File "/home/shuangjie/.virtualenvs/polar-seg/lib/python3.6/site-packages/torch/nn/functional.py", line 488, in _max_pool2d
    input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: max_pool2d_with_indices_out_cuda_frame failed with error code 0

Looking forward for your reply.

Other dataset support

Hi:

Thanks for sharing your excellent work.
As you mentioned in your paper, you tested your approach on three different datasets: SemanticKITTI, A2D2, and Paris-Lille-3D.
Would you like to provide the program or pre-trained weights for these datasets?
Thanks!

Result visualization

Hi, thanks for your source code.
I have one question:
I used test_pretrain.py to get inferred labels, but when I use the semantic-kitti-api to visualize them, the results look dark, as if the labels do not correspond to the official colors. Have you seen an issue like this?
KeyError: 'coarse'

When I run train_PL.py, I get this error at line 63: KeyError: 'coarse'. I would like to ask how your coarse_classes.xml file is organized, or what the meaning of the coarse property is.

What are the visibility features?

Hello, thank you very much for your great work! I have a question: what are the visibility features, and what is the advantage of using them?

Why calculate IoU from voxel label?

Hi.

I am new to point cloud segmentation. Some problems have been bothering me a lot.

1) Why does PolarNet (here) calculate IoU from voxel-wise labels instead of point-wise labels? PolarNet even directly saves the voxel labels to the file for submission (here).

2) As I learned from semantic-kitti-api, predictions for each point of the scan are required when submitting to the benchmark. Is my understanding correct? Or is it not necessary to predict the label of every point?

Thank you for answering. This has really been bothering me.
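
For reference, mapping a voxel-wise prediction back to per-point labels is a simple gather using each point's grid index; a hedged sketch (not necessarily how the released scripts handle submission):

import numpy as np

voxel_pred = np.random.randint(0, 19, size=(480, 360, 32))        # class per polar cell
grid_idx = np.random.randint(0, [480, 360, 32], size=(1000, 3))   # each point's cell index
point_pred = voxel_pred[grid_idx[:, 0], grid_idx[:, 1], grid_idx[:, 2]]  # (N,) per-point labels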

Label encoding

@YangZhang4065 @Xyouz Thanks for sharing the source code. Can you please let me know the purpose of functions like SemKITTI2train_single(label) and SemKITTI2train(label), which are used during training and testing? How do they modify the labels when saving?

about 'ring convolutions'

@YangZhang4065 @edwardzhou130 Thank you for open-sourcing this great repo!
I have a question about the ring convolutions that you mention in section 3.4 of your paper:
where is the implementation? Is it here, just in the padding?
Thank you a lot in advance!

x = F.pad(x,(1,1,0,0),mode = 'circular')
x = self.conv1(x)
x = F.pad(x,(1,1,0,0),mode = 'circular')
x = self.conv2(x)

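For reference, the circular padding in that snippet wraps the feature map along its last (angular) dimension so that the first and last grid columns become neighbours, which matches the ring-convolution idea; a self-contained sketch (an illustration, not the repo's exact module):

import torch
import torch.nn as nn
import torch.nn.functional as F

class RingConv(nn.Module):
    # 3x3 conv that is circular along the angular axis (width)
    # and zero-padded along the radial axis (height)
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=(1, 0))
    def forward(self, x):
        x = F.pad(x, (1, 1, 0, 0), mode='circular')  # wrap the angle dimension
        return self.conv(x)

x = torch.randn(1, 8, 480, 360)   # (batch, channels, radius bins, angle bins)
print(RingConv(8, 16)(x).shape)   # torch.Size([1, 16, 480, 360])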

Which set is the pretrained model trained on?

Most papers' experimental setting for KITTI uses sequences 00-10 as the training set, with 08 held out for validation. Was the pretrained model trained on the whole of sequences 00-10, or on sequences 00-10 excluding 08?
Thanks!

Inference on a custom dataset (point cloud files in the same format as SemanticKITTI (.bin))

Hello and thanks for your work!
I generated my dataset using a simulator, trying to create point clouds as similar as possible to those of SemanticKITTI: a 64-beam Velodyne with the same FOV parameters and number of points.
The problem is that I only have the .bin files without labels. I would simply like to use your network to run inference on my points and generate labels. What can I do? Thank you
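
For reference, a SemanticKITTI-format scan is an (N, 4) float32 array, and labels are one uint32 per point; a minimal sketch of loading a scan and writing placeholder labels so the existing dataloader can run (paths illustrative):

import numpy as np

scan = np.fromfile('data/sequences/00/velodyne/000000.bin',
                   dtype=np.float32).reshape(-1, 4)   # x, y, z, intensity
dummy = np.zeros(scan.shape[0], dtype=np.uint32)      # every point 'unlabeled'
dummy.tofile('data/sequences/00/labels/000000.label')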

Training index out-of-bounds error

Hi @edwardzhou130
Thanks for your effort on this.
I tried to start training with SemanticKITTI seq 04 and the default arguments, and got the following:

Namespace(check_iter=4000, data_dir='data', grid_size=[480, 360, 32], model='polar', model_save_path='./SemKITTI_PolarSeg.pt', train_batch_size=2, val_batch_size=2)
0%| | 0/136 [00:00<?, ?it/s]Traceback (most recent call last):
  File "train.py", line 198, in <module>
    main(args)
  File "train.py", line 118, in main
    val_vox_label = SemKITTI2train(val_vox_label)
  File "train.py", line 39, in SemKITTI2train
    return SemKITTI2train_single(label)
  File "train.py", line 44, in SemKITTI2train_single
    label[remove_ind] = 255
IndexError: index 17592186044634 is out of bounds for dimension 2 with size 360

So I tried changing the batch and val size to 1 and the grid size to [320, 240, 32]:

Namespace(check_iter=4000, data_dir='data', grid_size=[320, 240, 32], model='polar', model_save_path='./SemKITTI_PolarSeg.pt', train_batch_size=1, val_batch_size=1)
0%| | 0/271 [00:00<?, ?it/s]

Traceback (most recent call last):
  File "train.py", line 198, in <module>
    main(args)
  File "train.py", line 118, in main
    val_vox_label = SemKITTI2train(val_vox_label)
  File "train.py", line 39, in SemKITTI2train
    return SemKITTI2train_single(label)
  File "train.py", line 44, in SemKITTI2train_single
    label[remove_ind] = 255
IndexError: index 227 is out of bounds for dimension 0 with size 1

Can you help me figure this out? Thanks in advance.

Training on custom dataset

Hello Team,

Thank you so much for the great work.

I am trying to use this code base for a custom dataset. For example, I have two classes, road and car. In one file I have this many ground-truth label points for the two classes: road: 895735 and car: 157860.
After calling the function named spherical_dataset, the processed_label variable contains very few point labels, for example: road: 274 and car: 9.
Q1: Will this many points be enough for training?

In the code base, there is a fixed_volume_space for SemanticKITTI. When calling the function I set that argument to False, and now I calculate the volume space from the data after converting it from Cartesian to polar.

Q2: Is this the proper way? Or can you please let me know how you fixed the volume space? I don't get it. Could that also be the reason for the small number of points per class (after calling spherical_dataset)?

As you mentioned in the paper, you tried different grid sizes for SemanticKITTI and then fixed the grid size.
Q3: How does this affect a custom dataset? How should we choose the grid size? If I increase the grid size, my GPU (32 GB, batch size 1) cannot handle the custom dataset.

Can you please help me with my questions? It would be a great help.

Thanks and Regards,
Resha Thacker
