
🔥RandLA-Net in Tensorflow (CVPR 2020, Oral & IEEE TPAMI 2021)


randla-net's Introduction


RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds (CVPR 2020)

This is the official implementation of RandLA-Net (CVPR2020, Oral presentation), a simple and efficient neural architecture for semantic segmentation of large-scale 3D point clouds. For technical details, please refer to:

RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds
Qingyong Hu, Bo Yang*, Linhai Xie, Stefano Rosa, Yulan Guo, Zhihua Wang, Niki Trigoni, Andrew Markham.
[Paper] [Video] [Blog] [Project page]

(1) Setup

This code has been tested with Python 3.5, Tensorflow 1.11, CUDA 9.0 and cuDNN 7.4.1 on Ubuntu 16.04.

  • Clone the repository
git clone --depth=1 https://github.com/QingyongHu/RandLA-Net && cd RandLA-Net
  • Setup python environment
conda create -n randlanet python=3.5
source activate randlanet
pip install -r helper_requirements.txt
sh compile_op.sh
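
To quickly verify that the environment matches the tested versions above, a minimal sanity check (a sketch, not part of the repository):

    import tensorflow as tf
    print(tf.__version__)              # expect 1.11.x
    print(tf.test.is_gpu_available())  # expect True with CUDA 9.0 / cuDNN 7.4.1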

Update 03/21/2020: pre-trained models and results are now available; you can download them here. Note: please specify the model path in the main function (e.g., main_S3DIS.py) if you want to use a pre-trained model for a quick try of RandLA-Net.
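
For example, to evaluate S3DIS Area 5 with a downloaded snapshot (the snapshot path here is hypothetical):

python -B main_S3DIS.py --gpu 0 --mode test --test_area 5 --model_path models/S3DIS/Area_5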

(2) S3DIS

The S3DIS dataset can be found here. Download the file named "Stanford3dDataset_v1.2_Aligned_Version.zip", uncompress it, and move it to /data/S3DIS.

  • Preparing the dataset:
python utils/data_prepare_s3dis.py
  • Start 6-fold cross validation:
sh jobs_6_fold_cv_s3dis.sh
  • Move all generated results (*.ply) from the /test folder to /data/S3DIS/results, then calculate the final mean IoU (a sketch of this computation follows the command):
python utils/6_fold_cv.py
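
For reference, a minimal numpy sketch of the mean-IoU computation that this step performs (the variable names are illustrative, not the script's actual code):

    import numpy as np

    def mean_iou(gt, pred, num_classes=13):
        # build a confusion matrix via bincount; S3DIS has 13 classes
        conf = np.bincount(num_classes * gt + pred,
                           minlength=num_classes ** 2).reshape(num_classes, num_classes)
        tp = np.diag(conf).astype(np.float64)
        fp = conf.sum(axis=0) - tp
        fn = conf.sum(axis=1) - tp
        iou = tp / np.maximum(tp + fp + fn, 1)  # guard against empty classes
        return iou.mean(), iou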

Quantitative results of different approaches on S3DIS dataset (6-fold cross-validation):

(figure omitted)

Qualitative results of our RandLA-Net:

(figure omitted)

(3) Semantic3D

7zip is required to uncompress the raw data of this dataset. To install p7zip:

sudo apt-get install p7zip-full
  • Download and extract the dataset. First, specify the dataset path by changing BASE_DIR in "download_semantic3d.sh"
sh utils/download_semantic3d.sh
  • Preparing the dataset:
python utils/data_prepare_semantic3d.py
  • Start training:
python main_Semantic3D.py --mode train --gpu 0
  • Evaluation:
python main_Semantic3D.py --mode test --gpu 0

Quantitative results of different approaches on Semantic3D (reduced-8):

(figure omitted)

Qualitative results of our RandLA-Net:

(figure omitted)

Note:

  • More than 64 GB of RAM is recommended for processing this dataset, due to the large volume of the point clouds

(4) SemanticKITTI

The SemanticKITTI dataset can be found here. Download the files related to semantic segmentation, extract everything into the same folder, and move it to /data/semantic_kitti/dataset.

  • Preparing the dataset:
python utils/data_prepare_semantickitti.py
  • Start training:
python main_SemanticKITTI.py --mode train --gpu 0
  • Evaluation:
sh jobs_test_semantickitti.sh

Quantitative results of different approaches on SemanticKITTI dataset:

(figure omitted)

Qualitative results of our RandLA-Net:

(figure omitted)

(5) Demo

Citation

If you find our work useful in your research, please consider citing:

@article{hu2019randla,
  title={RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds},
  author={Hu, Qingyong and Yang, Bo and Xie, Linhai and Rosa, Stefano and Guo, Yulan and Wang, Zhihua and Trigoni, Niki and Markham, Andrew},
  journal={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}

@article{hu2021learning,
  title={Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling},
  author={Hu, Qingyong and Yang, Bo and Xie, Linhai and Rosa, Stefano and Guo, Yulan and Wang, Zhihua and Trigoni, Niki and Markham, Andrew},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE}
}

Acknowledgment

  • Part of our code refers to the nanoflann library and the recent work KPConv.
  • We use Blender to make the video demo.

License

Licensed under the CC BY-NC-SA 4.0 license, see LICENSE.

Updates

  • 21/03/2020: Updating all experimental results
  • 21/03/2020: Adding pretrained models and results
  • 02/03/2020: Code available!
  • 15/11/2019: Initial release!

Related Repos

  1. SoTA-Point-Cloud: Deep Learning for 3D Point Clouds: A Survey
  2. SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds
  3. 3D-BoNet: Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds
  4. SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration
  5. SQN: Weakly-Supervised Semantic Segmentation of Large-Scale 3D Point Clouds with 1000x Fewer Labels

randla-net's People

Contributors

dependabot[bot], qingyonghu, yang7879


randla-net's Issues

about the visualization demo

Hi, authors,

Your visualization demos look very attractive. Did you use any other 3rd party tool to generate them after obtaining the segmentation outputs?

About testing question

Hello, I am a bit confused about the testing script. Line 57 of tester_S3DIS.py uses dataset.val_labels during testing, but in a real test I do not know the label of each point, so how should val_labels be initialized when testing? Looking forward to your reply, thanks!

segmentation fault (core dumped)

@QingyongHu While training on SemanticKITTI, I repeatedly run into a segmentation fault and cannot continue training. Do you know what might cause this?
My machine has a GTX 1080 Ti and 64 GB of RAM.

mean accuracy

Hello, I am honored to have read your paper and have benefited a lot from it. However, I have one question at present: I can't find where the code outputs the mAcc of a dataset, such as S3DIS. This may be a very simple question, but I sincerely hope to get your answer. Thank you!

dilation rate schedule

Thank you for your good work! Could you tell me the dilation rate applied in each layer for the S3DIS segmentation?

Question about inference

Dear Qingyong Hu,
I have trained the network on my data and it works perfectly, but I have a problem with inference (--mode test). My data has 5 classes and I set class 0 as ignored during training, so the inference outputs only 4 labels, from 1 to 4. But label 0 is the background label for my data, which means I get only 4 probabilities instead of 5 (including background). How can I fix this?

Many thanks!
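
For what it's worth, a common pattern when class 0 is ignored during training (a hypothetical sketch of the usual remapping, not code from this repository) is to let the network predict over the 4 trained classes and shift the ids back afterwards:

    import numpy as np

    probs = np.random.rand(1000, 4)        # toy network output over the 4 trained classes
    pred_train_id = probs.argmax(axis=-1)  # ids 0..3
    pred_label = pred_train_id + 1         # shift back to original labels 1..4; 0 stays reserved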

Instance segmentation

What would be a good way to take the results of semantic segmentation such as this as input and produce instance segmentation as output? References to published work appreciated.

I am aware that there have been works that have focused on training networks to directly do instance segmentation. But, to build upon "semantic segmentation only" works such as this, a stand-alone "semantic segmentation to instance segmentation of point clouds" work will be helpful.

Thanks.

Question about

Hi,
Thank you for publishing your code.
I have a few questions about the code:

  1. In the "building_block" function in RandLANet.py, at line 287, the input to the conv2d layer (mlp2) is f_xyz:
    f_xyz = helper_tf_util.conv2d(f_xyz, d_out // 2, [1, 1], name + 'mlp2', [1, 1], 'VALID', True, is_training)
    which was defined earlier at lines 281-282:
    f_xyz = self.relative_pos_encoding(xyz, neigh_idx)
    f_xyz = helper_tf_util.conv2d(f_xyz, d_in, [1, 1], name + 'mlp1', [1, 1], 'VALID', True, is_training)
    So the input to the mlp2 layer is the output of the mlp1 layer.
    Isn't mlp2 supposed to take the relative_pos_encoding as input, without it going through mlp1? Something like:
    f_xyz = self.relative_pos_encoding(xyz, neigh_idx)
    f_xyz_1 = helper_tf_util.conv2d(f_xyz, d_in, [1, 1], name + 'mlp1', [1, 1], 'VALID', True, is_training)
    f_xyz_2 = helper_tf_util.conv2d(f_xyz, d_out // 2, [1, 1], name + 'mlp2', [1, 1], 'VALID', True, is_training)

  2. At line 272, shouldn't the output of the LFA be d_out instead of d_out/2? If I understand correctly, in the Dilated Residual Block diagram of the paper, the output of the second Attentive Pooling block (which is d_out) is the output of the LFA.

  3. In the paper, describing the Relative Point Position Encoding, you mention taking the norm of the relative position ( || || ). In the code, at line 297, it seems you take the squared norm of the relative distance. Which is correct?

Thank you very much
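
For readers following this discussion, a minimal numpy sketch of relative point position encoding as described in the paper, assuming neighbor indices are already gathered; the concatenation order is illustrative, and the distance term uses the Euclidean norm (the subject of question 3):

    import numpy as np

    def relative_pos_encoding(xyz, neigh_idx):
        # xyz: (N, 3) points; neigh_idx: (N, K) indices of the K nearest neighbors
        neighbors = xyz[neigh_idx]                                   # (N, K, 3)
        centers = np.broadcast_to(xyz[:, None, :], neighbors.shape)  # (N, K, 3)
        relative = centers - neighbors                               # (N, K, 3)
        dist = np.linalg.norm(relative, axis=-1, keepdims=True)      # (N, K, 1), Euclidean norm
        # concatenate [p_i, p_k, p_i - p_k, ||p_i - p_k||] -> (N, K, 10)
        return np.concatenate([centers, neighbors, relative, dist], axis=-1)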

Error while running main_Semantic3D.py

Hello,
I'm using my own data to train the model and have prepared it as per the instructions, with the same environment as mentioned in the documentation.
But when I run the training file, main_Semantic3D.py, just before the first epoch begins I get an error for the line
queried_pt_weight = np.array([self.class_weight[split][0][n] for n in queried_pc_labels])
with the message "too many array indices".
I tried fixing it but did not understand why there is a loop inside. In self.class_weight, we have an array for each of the 3 splits with only one integer. Please help me out.

Custom data training

Hello @QingyongHu, thanks for the great work; I've been working with it for a few days.
I was testing the model on my own data, which has around 3-5 million points.
I've finished training, but when I run it in test mode, I get fewer label outputs than the original number of points.
Even the original_ply files have fewer points than my input txt files.
What could be the reason?

Training Time

Hi! Thanks for your open-sourced code.
I am new to this field.
I wonder about the training time of RandLA-Net on these three large-scale datasets. It seems you use only one GPU for training, so how long does it take to achieve the paper's results?

Hyper-parameters when training on SemanticKITTI?

hello @QingyongHu

What are the values of the batch size and N when training on SemanticKITTI?

In Section 3.4 (Implementation) of your paper, it says a fixed N ~= 1e6 is used, but the number of points in each scan of SemanticKITTI is less than 1e6.

Also, how does the training value of N influence test performance? For example, given two training configurations,
a. use N = 1e4 during training
b. use N = 1e5 during training

which configuration will get the best performance on SemanticKITTI?

thanks!

Question about the data process of Scannet

Hi @QingyongHu
Thank you for sharing the wonderful code!
I want to train the model on the ScanNet dataset. Could you please give me some suggestions about the data processing for ScanNet? Should the data be processed in a manner similar to S3DIS?

Question about determining train_steps/val_steps in helper_tool.py

Hi, @QingyongHu ,

Thanks a lot for the release of your package. I tried training on the S3DIS dataset and it runs smoothly.

One thing confusing me is train_steps/val_steps in helper_tool.py. Why aren't train_steps/val_steps configured as the total size of the training/validation set divided by the batch_size?

It appears that train_steps/val_steps are configured to the same value for all 6 areas of S3DIS. Will this miss some parts of the larger areas, such as Area 5?

THX!

Visualization of semanticKITTI

After running the test program on the SemanticKITTI dataset, I got some .label files, but the number of points in each label file differs from the original point cloud file. How can I use these label files to visualize the point clouds of the test sequences?
Looking forward to your reply.
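
For context, SemanticKITTI .label files store one uint32 per point, with the semantic class in the lower 16 bits and the instance id in the upper 16 bits; a minimal sketch for reading one (the file path is hypothetical):

    import numpy as np

    raw = np.fromfile('sequences/08/labels/000000.label', dtype=np.uint32)
    sem_label = raw & 0xFFFF  # semantic class id (lower 16 bits)
    inst_label = raw >> 16    # instance id (upper 16 bits)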

Memory leak in data_prepare_semantickitti.py (+ how to fix)

Howdy friends,

There's a memory leak in data_prepare_semantickitti.py, caused by pickling the numpy ndarray (specifically the line pickle.dump(search_tree, f)).

Apparently this is due to a bug in numpy 1.16.0 (numpy/numpy#12896), fixed in 1.16.1, so I recommend updating the requirements list.

On my 16 GB machine, this caused the data-preprocessing script to crash.

Fix: pip install numpy==1.16.1

Cheers,
Aljosa

Sparse Indexed slices and segmentation fault - Semantic3D

Hello,
I am trying to train on the Semantic3D dataset (Python 3.6, TF 1.11, Ubuntu 18.04), but I get the following warning and I don't know what it means:

UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "

Then, before the first epoch runs, I get a segmentation fault (core dumped). Do you have any suggestions on what I could do?

Thank you for any help!

about the code

Hi, authors,

Is the code for RandLA-Net released? Where could I find it?

THX!

Clarification request for data_prepare_semantickitti.py

Hi, thank you for your great work, really good paper!

I have an issue with data_prepare_semantickitti.py: when I run it, I get the following error:
AttributeError: module 'cpp_wrappers.cpp_subsampling.grid_subsampling' has no attribute 'compute'
I did run sh compile_op.sh and installed the required libraries.

  • Do you have any suggestion?

I see the preparation file subsamples the points according to a grid, then creates a KDTree with the remaining ones.
However, I do not understand the reason behind this preparation, as it is not mentioned in the paper. Is it an organized way to reduce the number of points, which then helps towards same-sized samples in a batch?

  • Could you please explain it?

Thank you!
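
For readers with the same question, a minimal numpy sketch of what voxel-grid subsampling does conceptually; the repository's compiled C++ op additionally aggregates features and labels per cell (by averaging/voting, as I understand it), whereas this sketch just keeps one representative point per occupied voxel:

    import numpy as np

    def grid_subsample(points, grid_size=0.06):
        # bucket points into voxels, then keep the first point seen in each voxel
        voxels = np.floor(points / grid_size).astype(np.int64)
        _, keep = np.unique(voxels, axis=0, return_index=True)
        return points[np.sort(keep)]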

Visualization problems

After the .ply files are generated in the test folder, how should I use the Plot class in helper_tool.py to visualize them? (I have already run the test of main_S3DIS.py.)

Semantic KITTI error: from open3d import linux as open3d

Hi,

When following the instructions for inference/training on Semantic KITTI, I got an error running data_prepare_semantickitti.py (an import error on the line "from open3d import linux as open3d"). I had followed the instructions step by step.

Fix: I've upgraded Open3D from 0.8 (recommended) to 0.9 (latest): pip install open3d==0.9

No /test folder generated

hi, I ran:

  • sh jobs_6_fold_cv_s3dis.sh
    but it did not generate the test folder containing the .ply files, so I cannot continue with the step
  1. "Move all the generated results (*.ply) in /test folder to /data/S3DIS/results, calculate the final mean IoU results:"

Note: many .ply files were generated in these folders: RandLA-Net/data/S3DIS/input_0.040/ and RandLA-Net/data/S3DIS/input_0.040/original_ply

SemanticKITTI evaluation error

Hello,

When I try to perform the evaluation for the SemanticKITTI dataset using sh jobs_test_semantickitti.sh, I get the following error:

Initiating input pipelines
WARNING:tensorflow:From /media/munib/Munib_Ahsan/pcss/RandLA-Net/RandLANet.py:265: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See `tf.nn.softmax_cross_entropy_with_logits_v2`.

/home/media-server/.pyenv/versions/anaconda3-5.0.0/envs/randlanet/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py:108: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Traceback (most recent call last):
  File "main_SemanticKITTI.py", line 214, in <module>
    if FLAGS.mode_path is not 'None':
AttributeError: 'Namespace' object has no attribute 'mode_path'

I don't understand why this is happening, because even if the model path is not specified, the following code in main_SemanticKITTI.py should account for it:

        if FLAGS.model_path is not 'None':
            chosen_snap = FLAGS.model_path
        else:
            chosen_snapshot = -1
            logs = np.sort([os.path.join('results', f) for f in os.listdir('results') if f.startswith('Log')])
            chosen_folder = logs[-1]
            snap_path = join(chosen_folder, 'snapshots')
            snap_steps = [int(f[:-5].split('-')[-1]) for f in os.listdir(snap_path) if f[-5:] == '.meta']
            chosen_step = np.sort(snap_steps)[-1]
            chosen_snap = os.path.join(snap_path, 'snap-{:d}'.format(chosen_step))
        tester = ModelTester(model, dataset, restore_snap=chosen_snap)
        tester.test(model, dataset)

The results folder exists and contains multiple log folders, each containing multiple snapshots. Is there something I am doing wrong?

Question about my own test data based on Semantic3D

@QingyongHu
Thanks for sharing the wonderful code!
I have an outdoor test dataset of about 4 million points and ran the test with the pre-trained Semantic3D model you provided. Using data_prepare_semantic3d.py, the ply and pkl files were generated successfully. But the test result is not good: basically all the points are labeled as "hard scape".

Here is the question:
You do "queried_pc_xyz[:, 0:2] = queried_pc_xyz[:, 0:2] - pick_point[:, 0:2]" and leave Z out. In my test data, the Z values range from 101 to 130, which is hugely different from the values in the Semantic3D dataset.
Is it necessary to normalize the Z coordinate, or do you have any other suggestions?
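
For illustration only, one plausible normalization (an assumption, not code from this repository) is to shift each queried block so its Z starts at zero, analogous to the x/y centering quoted above:

    import numpy as np

    # toy block of points with Z roughly in [101, 131], as in the question
    queried_pc_xyz = np.random.rand(1000, 3) * [50.0, 50.0, 30.0] + [0.0, 0.0, 101.0]
    pick_point = queried_pc_xyz.mean(axis=0, keepdims=True)

    queried_pc_xyz[:, 0:2] = queried_pc_xyz[:, 0:2] - pick_point[:, 0:2]      # the repo's x/y centering
    queried_pc_xyz[:, 2] = queried_pc_xyz[:, 2] - queried_pc_xyz[:, 2].min()  # hypothetical Z shift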

Question about running the code

When running compile_op.sh to compile, are there any particular requirements on the gcc/g++ versions? I got warnings during compilation, and afterwards running jobs_6_fold_cv_s3dis.sh reported a parameter error.

Custom dataset training

@QingyongHu @Yang7879 thanks for sharing your wonderful work; it was of great help to me, but I have a few queries regarding training on a custom dataset.
Q1. The point density of my custom data is sparser than that of the datasets used here, so what precautions should I take when training with the source code?
Q2. What is the minimum amount of training data required to see decent results?
Q3. Can we overfit the model during training? If so, which parameters should I tune to do this?

Thanks in advance

No module named 'sklearn.neighbors._kd_tree'

@QingyongHu Hello! Thanks for your released code. When I run python main_Semantic3D.py --mode train --gpu 0, an error occurred as below:

Load_pc_0: bildstein_station1_xyz_intensity_rgb
Traceback (most recent call last):
  File "main_Semantic3D.py", line 341, in <module>
    dataset = Semantic3D()
  File "main_Semantic3D.py", line 92, in __init__
    self.load_sub_sampled_clouds(cfg.sub_grid_size)
  File "main_Semantic3D.py", line 123, in load_sub_sampled_clouds
    search_tree = pickle.load(f)
ModuleNotFoundError: No module named 'sklearn.neighbors._kd_tree'

How can I solve it? Thank you!

error when running data_prepare_semantickitti.py

Hi, @QingyongHu ,

When I run python utils/data_prepare_semantickitti.py, I got the following error after finishing seq00 (000000.bin to 004540.bin) and starting seq01:

004536.bin
004537.bin
004538.bin
004539.bin
004540.bin
sequence01 start
000000.bin
000000.npy
Traceback (most recent call last):
  File "utils/data_prepare_semantickitti.py", line 44, in <module>
    sub_points, sub_labels = DP.grid_sub_sampling(points, labels=labels, grid_size=grid_size)
  File "/data/code10/RandLA-Net/helper_tool.py", line 211, in grid_sub_sampling
    return cpp_subsampling.compute(points, classes=labels, sampleDl=grid_size, verbose=verbose)
RuntimeError: Wrong dimensions : classes.shape is not (N,) or (N, d)

Any suggestion to fix this issue?

THX!

prediction results

hi qingyong:
After running the test on Semantic3D, how should I visualize the results? I would like to be able to see the original point cloud, the labels, and the predictions.

Clarification about main_Semantic3D

Hi @QingyongHu,
first of all, thank you for sharing your work. I managed to train the model with my own data achieving promising results!
I'm trying to implement your code in PyTorch for my convenience, so this may not be my last question :P
Looking at line 208 in main_Semantic3D.py, you do the following:
queried_pc_xyz[:, 0:2] = queried_pc_xyz[:, 0:2] - pick_point[:, 0:2]
I would say you normalize the x and y coordinates of the queried points with respect to the query center. Why do you leave z out? Is there any particular reason I'm missing?
Thank you,
Gabry

motorcyclist label

Hello. First, thank you for your work. When training on SemanticKITTI, I noticed that the score of the small class "motorcyclist" was always 0 (max epoch 100), while the value reported in your article is 4.6% and on GitHub it is 7.2%. Did you modify the training parameters or use some data augmentation techniques?
Thank you for any help!

how to visualize the semantic segmentation results on S3DIS dataset?

Hi @QingyongHu, thanks for your released code. Could you please share how you visualize your results (e.g., on the S3DIS dataset)? I found that the result files ('test/Log_***********/Area_**********.ply') only contain pred and label, without x, y, z, and I have no idea how to visualize these .ply files directly. Thanks in advance.
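
As an illustrative sketch, the coordinates can be recovered from the matching file in original_ply and joined with the predictions. Two assumptions here are worth verifying: that helper_ply.read_ply returns a numpy structured array (as it does elsewhere in this repository), and that the result file preserves the point order of the original_ply file; the paths are hypothetical:

    import numpy as np
    from helper_ply import read_ply

    orig = read_ply('data/S3DIS/original_ply/Area_5_office_1.ply')
    res = read_ply('test/Log_xxxx/Area_5_office_1.ply')
    xyz = np.vstack((orig['x'], orig['y'], orig['z'])).T  # (N, 3) coordinates
    pred = res['pred']                                    # (N,) predicted labels
    # xyz plus pred can now be colored and shown in any point-cloud viewer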

The efficiency of KNN

Hi, I'm very interested in your work, but I have a question about the computation of KNN.
The network is efficient due to the random sampling strategy, but I notice that it still uses KNN to find each point's neighbors. When the point cloud contains 10^6 points, calculating each point's KNN is very time-consuming. Do you pre-compute the KNN results before feeding point clouds to the network?
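
For context, the data-preparation scripts in this repository do build a KDTree per point cloud offline and pickle it, so neighbor lookups at run time are sub-linear tree queries rather than brute-force KNN (a compiled nanoflann-based op is also shipped). A minimal sketch of the precompute-then-query pattern with sklearn:

    import pickle
    import numpy as np
    from sklearn.neighbors import KDTree

    points = np.random.rand(10 ** 6, 3).astype(np.float32)  # toy cloud

    tree = KDTree(points)                      # built once, offline
    with open('cloud_KDTree.pkl', 'wb') as f:  # pickled, as in the data_prepare_* scripts
        pickle.dump(tree, f)

    # at run time: K nearest neighbors for a batch of query points
    queries = points[np.random.choice(len(points), 4096)]
    dist, neigh_idx = tree.query(queries, k=16)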

Error when predicting directly with the pre-trained model

Hello, Dr. Hu. I am directly using the pre-trained model you provided to predict the data of the first room of the S3DIS dataset. I ran python -B main_S3DIS.py --gpu 0 --mode test --test_area 1 --model_path /media/kfr/Elements\ SE/RandLA-Net/RandLA-Net/models/S3DIS/Area_1
and the following error appeared:

Traceback (most recent call last):
  File "main_S3DIS.py", line 243, in <module>
    if FLAGS.mode_path is not 'None':
AttributeError: 'Namespace' object has no attribute 'mode_path'

The full output is as follows:
Area_1_conferenceRoom_1_KDTree.pkl 2.2 MB loaded in 0.7s
Area_1_conferenceRoom_2_KDTree.pkl 3.0 MB loaded in 0.2s
[... loading log for the remaining KDTree .pkl files of Areas 1-6 omitted ...]
Area_6_openspace_1_KDTree.pkl 3.7 MB loaded in 0.0s
Area_6_pantry_1_KDTree.pkl 0.9 MB loaded in 0.0s
Preparing reprojected indices for testing
Area_1_conferenceRoom_1 done in 0.0s
[... reprojection log for the remaining Area_1 rooms omitted ...]
Area_1_office_29 done in 0.0s
Initiating input pipelines
WARNING:tensorflow:From /media/kfr/Elements SE/RandLA-Net/RandLANet.py:265: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

/home/kfr/anaconda3/envs/randlanet/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py:108: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Traceback (most recent call last):
  File "main_S3DIS.py", line 243, in <module>
    if FLAGS.mode_path is not 'None':
AttributeError: 'Namespace' object has no attribute 'mode_path'
I don't know what causes this. Please take a look; thanks.

The visualization demo

Your visualization demos look very attractive. I wonder how to combine the .ply segmentation predictions of the different blocks in S3DIS into whole areas (Area 1, Area 2, etc.), and how do you use Blender to make the video demo?

Problems with running main_Semantic3D.py

Hi, I managed to prepare the data and wanted to train the network with main_Semantic3D.py, but this error appears:

ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject

Is it a problem with numpy? My version is 1.16.0. Should I upgrade or downgrade it, perhaps?
Thanks!

dataset.min_possibility

@QingyongHu Hi, I have two questions about your work.

  1. What does min_possibility mean in the testing phase for SemanticKITTI?

  2. I followed your steps to install the dependencies and used your pre-trained model for SemanticKITTI testing, but the number of points in the label file differs from the original point cloud file, so the visualization function throws an error like:

Traceback (most recent call last):
  File "kitti_visualization.py", line 23, in <module>
    Plot.draw_pc_sem_ins(points , labels)
  File "/projects/RandLA-Net\helper_tool.py", line 310, in draw_pc_sem_ins
    valid_xyz = pc_xyz[valid_ind]
IndexError: index 89267 is out of bounds for axis 0 with size 87936

Why is the number different?

Waiting for your reply. Thank you.
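
For illustration, the usual way this codebase bridges that gap is to precompute, for every point of the original cloud, the index of its nearest neighbor in the subsampled cloud ("reprojected indices"), and then index the sub-cloud predictions with it; a minimal sketch under that assumption:

    import numpy as np
    from sklearn.neighbors import KDTree

    sub_points = np.random.rand(1000, 3)       # toy subsampled cloud the network saw
    sub_pred = np.random.randint(0, 19, 1000)  # toy per-point predictions on it
    full_points = np.random.rand(5000, 3)      # toy original scan

    tree = KDTree(sub_points)
    proj_idx = tree.query(full_points, k=1, return_distance=False).squeeze()  # (5000,)
    full_pred = sub_pred[proj_idx]  # one label per original point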

InternalError (see above for traceback): Blas xGEMM launch failed

Hi, thanks for your great work! When I try to train your model, I run into the problem below; can you help me? Looking forward to your reply.

InternalError (see above for traceback): Blas xGEMM launch failed : a.shape=[1,1024,3], b.shape=[1,3,2048], m=1024, n=2048, k=3
  [[Node: tower_1/layer1/MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](tower_1/layer1/ExpandDims_1, tower_1/layer1/transpose)]]
  [[Node: tower_1/fa_layer3/conv_0/bn/cond/add_1/_1179 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3471_tower_1/fa_layer3/conv_0/bn/cond/add_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

Question about data scale

Hello, I would like to ask: what is the maximum number of points this algorithm can handle? Is it 10^6? In practice my point clouds contain hundreds of millions of points; should I subsample them down to the million level before training? Looking forward to your answer, thanks!

Pre-trained model and inference

@morpheus1820 @Yang7879 @QingyongHu Hi, thanks for open-sourcing the code. I have the following queries:

  1. Can you share the pre-trained model for the SemanticKITTI dataset? I want to check the output on a custom dataset.
  2. How can I obtain the list of classes predicted by the model from the inference code?
  3. Can we get the locations of the predicted objects, or draw cuboids around them?

Thanks in advance
