octformer's Issues

ScanNet Segmentation

Hello, when downloading the ScanNet dataset for segmentation, do I need to download the complete 1.3 TB dataset, or is it possible to download only a part of it?

Multiple GPUs Error

Parameter indices which did not receive grad for rank 1: 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 142 143 144 145 146 147 148 149 150 151 ...

About sub-figures (d) and (g) of Fig. 3

Hello, the sorting rule I derived from the dilation setting in Algorithm 1 is inconsistent with sub-figures (d) and (g) in Figure 3.
Does sorting with the dilation setting operate on one dimension, or two?
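
To make my reading concrete, here is a tiny sketch of how I currently interpret the dilated grouping (this is my own interpretation of Algorithm 1, not code taken from the repository):

  import torch

  # 16 octree nodes, already sorted along the single shuffled-key (z-order) curve.
  K, D = 4, 2                       # patch size and dilation
  idx = torch.arange(K * D * 2)     # node indices 0..15

  # My reading: the dilation only regroups this 1-D sequence, so that each patch
  # of size K gathers nodes that are D apart along the sorted order.
  dilated = idx.view(-1, K, D).transpose(1, 2).reshape(-1, K)
  print(dilated)
  # tensor([[ 0,  2,  4,  6],
  #         [ 1,  3,  5,  7],
  #         [ 8, 10, 12, 14],
  #         [ 9, 11, 13, 15]])

Under this reading the sort itself happens along a single dimension and the dilation merely regroups that one dimension, but the groups I obtain do not match sub-figures (d) and (g), hence my question.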

Visualization of ScanNet

Hi. Thank you for your work.
Is it possible to run a single ScanNet scan with the pretrained weights and generate a visualization like Fig. 5 in your paper?

AssertionError: The shape of input data is wrong in OctFormerCls model

I am encountering an issue when trying to use the OctFormerCls model.
I forked your repository and added a test notebook. In this notebook I am trying to use your OctFormerCls model with one of your sample models (airplane001.off).

Parameters Used

  • channel: 4
  • octree depth: 6
  • feature: ND

I built a fake batch with this data, a random label, and this octree.
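
Roughly, the setup looks like the sketch below (simplified: random tensors stand in for the points sampled from airplane001.off, and full_depth=2 as well as nempty=True are my own guesses rather than values taken from your configs):

  import torch
  import ocnn
  from ocnn.octree import Points, Octree

  # Stand-ins for the points and normals sampled from airplane001.off.
  xyz = torch.rand(2048, 3) * 2 - 1
  normals = torch.nn.functional.normalize(torch.rand(2048, 3), dim=1)

  points = Points(points=xyz, normals=normals)
  octree = Octree(depth=6, full_depth=2)       # octree depth 6, as listed above
  octree.build_octree(points)
  octree.construct_all_neigh()

  # Feature 'ND' = normal (3 channels) + displacement (1 channel) -> 4 channels,
  # which is why I set channel = 4 for the model.
  data = ocnn.modules.InputFeature(feature='ND', nempty=True)(octree)
  print(data.shape)                            # expected: (num_octants_at_depth_6, 4)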

Error

When I run the notebook, I get this error:

AssertionError: The shape of input data is wrong.

Notebook Permalink

https://github.com/FlorianDAVAUX/octformer/blob/b41c63f412148bcfb64a724b1144574d8944c058/test.ipynb

link to pre-trained weights

The link to the OneDrive folder with the pre-trained weights is not working anymore.
Could it be updated? Thanks in advance.

Code

Thanks for your nice work. Looking forward to your code!

tools/seg_scannet200.py

Hi, thanks for your great work. When I run the command:

  python tools/seg_scannet200.py --run process_scannet --path_in <scannet_folder> --path_out data/scanet200.npz --align_axis --scannet200

it seems that there is no seg_scannet200.py file. I guess the command should use tools/seg_scannet.py instead.

backward

Have you tried backpropagating through the network? When I try to backpropagate through the segmentation network, I get the error RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn. Do you have a solution for this?
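
For reference, this is the kind of sanity check I run right before loss.backward() (a generic PyTorch check I wrote myself, not code from this repository):

  import torch

  def check_grad_flow(model: torch.nn.Module, loss: torch.Tensor) -> None:
      """Print the usual suspects behind 'element 0 of tensors does not require grad'."""
      print('grad enabled globally:', torch.is_grad_enabled())   # False inside torch.no_grad()
      print('loss.requires_grad   :', loss.requires_grad)        # must be True to call backward()
      frozen = [name for name, p in model.named_parameters() if not p.requires_grad]
      print('frozen parameters    :', frozen[:5], '...' if len(frozen) > 5 else '')

As far as I understand, this error usually means the forward pass ran under torch.no_grad() / torch.inference_mode(), or that every parameter has requires_grad=False; do you know which of these the segmentation pipeline might trigger?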

test

Hello, I want to run inference on the ScanNet test set and submit the results to the official ScanNet benchmark. It seems that I need to train on all of the ScanNet data (both train and val) and then predict on the test set. Among all of the checkpoints produced (the 10–600 *model.pth files), how should I choose the best model for prediction, and is there anything else I should pay attention to? I am really looking forward to your answer.

Dataset

Hello, do we need to download the full dataset (1.3 TB) to train, or is a smaller subset sufficient?

Visualization

Hello,
I would like to know how to visualize the sparse voxels. Do you have any suggestions? Thanks.

Point Classification

Dear wang-ps,

Hello! I'm a researcher interested in point cloud segmentation tasks. I'm particularly intrigued by your open-source project OctFormer and its performance in point cloud classification tasks. I'm eager to learn more about OctFormer's performance metrics and experimental results in this area. Are there any relevant literature or experimental data you could share? I'm looking forward to your response. Thank you!

Best regards,
Liu-yt

Check failed: error == cudaSuccess invalid configuration argument

Thank you for your great work. I have successfully trained and tested OctFormer on the ScanNet dataset, but an error occurs when training on my own dataset. Despite looking up various sources, I can't determine the cause of the error, so I'd like to ask which part of the code this might be a problem with.

############################################################
Logdir: logs/railway3d/octformer_railway3d

  Epoch: 1, train/loss: 2.418, train/accu: 0.083, iter: 0
  -> Epoch: 1, train/loss: 2.408, train/accu: 0.098, time/data: 72.928, time/batch: 79.334, time: 2023/06/22 23:17:55, duration: 555.40s
  Epoch: 2, train/loss: 2.414, train/accu: 0.085, iter: 0
  -> Epoch: 2, train/loss: 2.397, train/accu: 0.105, time/data: 42.071, time/batch: 44.832, time: 2023/06/22 23:23:09, duration: 313.83s
  Epoch: 3, train/loss: 2.361, train/accu: 0.117, iter: 0
  -> Epoch: 3, train/loss: 2.354, train/accu: 0.157, time/data: 52.162, time/batch: 54.882, time: 2023/06/22 23:29:33, duration: 384.17s
  Epoch: 4, train/loss: 2.388, train/accu: 0.145, iter: 0
  -> Epoch: 4, train/loss: 2.285, train/accu: 0.277, time/data: 57.457, time/batch: 60.386, time: 2023/06/22 23:36:36, duration: 422.70s
  Epoch: 5, train/loss: 2.284, train/accu: 0.468, iter: 0
  -> Epoch: 5, train/loss: 2.105, train/accu: 0.534, time/data: 55.314, time/batch: 58.113, time: 2023/06/22 23:43:22, duration: 406.79s
  1%|▎ | 4/600 [34:42<67:42:25, 408.97s/it]
  [F dwconv.cu:111] Check failed: error == cudaSuccess invalid configuration argument
  Aborted (core dumped) | 1/12 [00:00<00:10, 1.03it/s]

############################################################

RuntimeError: CUDA error: invalid configuration argument

The information I've found suggests that this error is usually caused by an invalid CUDA launch configuration. There are several common things to check:

  1. Make sure the CUDA environment is correct: check that the CUDA installation is compatible with the code and with the PyTorch build (see the sketch after this list). A CUDA version mismatch can easily produce invalid-configuration errors.

  2. Check the CUDA thread-block, grid, and shared-memory settings: if the program launches custom CUDA kernels, the launch configuration must not exceed the limits of the GPU device, otherwise an invalid-configuration error is raised.

  3. Check the CUDA device number and ID settings: if the program uses multiple CUDA devices, make sure it actually runs on the intended GPU.

  4. Check CUDA memory usage: if the program uses a lot of GPU memory, incorrect memory handling can also lead to invalid-configuration errors.
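
For item 1 above, the quick check I ran looks like this (a generic sanity check I put together myself, not code from octformer):

  import torch

  # Compare these against the CUDA toolkit used to compile the dwconv extension.
  print('torch version      :', torch.__version__)
  print('built against CUDA :', torch.version.cuda)
  print('cuda available     :', torch.cuda.is_available())
  print('device             :', torch.cuda.get_device_name(0))
  print('compute capability :', torch.cuda.get_device_capability(0))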

I would like to know which parts of the octformer code I should check for these cases.
Looking forward to your response!

AssertionError

Thank you very much for your excellent work, but I encountered an assertion error while running python scripts/run_seg_scannet.py --gpu 0,1 --alias scannet --port 10001. I don't know where the problem lies; I hope you can give me some guidance. Thank you very much!
(screenshot of the error attached)

RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

Hi, when running the code, it reports an error at the line out[start:end] = torch.mm(buffer.flatten(1, 2), weights.flatten(0, 1)); after checking, it seems to be a CUDA problem. What version of nvcc or the CUDA toolkit were you using at the time? Could you give me a reference? Thanks!

Traceback (most recent call last):
  File "/home/zxy/lab/code/octformer/segmentation.py", line 197, in <module>
    SegSolver.main()
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/thsolver/solver.py", line 433, in main
    cls.worker(0, FLAGS)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/thsolver/solver.py", line 422, in worker
    the_solver.run()
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/thsolver/solver.py", line 397, in run
    eval('self.%s()' % self.FLAGS.SOLVER.run)
  File "<string>", line 1, in <module>
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/thsolver/solver.py", line 320, in train
    self.train_epoch(epoch)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/thsolver/solver.py", line 156, in train_epoch
    output = self.train_step(batch)
  File "/home/zxy/lab/code/octformer/segmentation.py", line 94, in train_step
    logit, label = self.model_forward(batch)
  File "/home/zxy/lab/code/octformer/segmentation.py", line 72, in model_forward
    logit = self.model(data, octree, octree.depth, query_pts)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zxy/lab/code/octformer/models/octformerseg.py", line 93, in forward
    features = self.backbone(data, octree, depth)  # pass in the point features, the octree, and the depth
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zxy/lab/code/octformer/models/octformer.py", line 386, in forward
    data = self.patch_embed(data, octree, depth)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zxy/lab/code/octformer/models/octformer.py", line 338, in forward
    data = self.convs[i](data, octree, depth_i)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/ocnn/modules/modules.py", line 72, in forward
    out = self.conv(data, octree, depth)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/ocnn/nn/octree_conv.py", line 357, in forward
    out = octree_conv(
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/ocnn/nn/octree_conv.py", line 225, in forward
    out = octree_conv.forward_gemm(out, data, weights)
  File "/home/zxy/miniforge3/envs/oct/lib/python3.8/site-packages/ocnn/nn/octree_conv.py", line 133, in forward_gemm
    out[start:end] = torch.mm(buffer.flatten(1, 2), weights.flatten(0, 1))
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

val set

Hello, I saw that you mentioned the test-set results are obtained by training on both the train and val sets. In that case, how should the validation set be configured during training? Many thanks.

Visualization

Hello, how can I visualize the point cloud results of the trained model like Figure 1 in your paper? I did not run a complete training (due to insufficient GPU memory), but I did run a complete evaluation. Looking forward to your reply.

tensor

Hello, I want to train segmentation on my own dataset, but the point data I load is a plain tensor. How can I convert a tensor into the ocnn.octree.point.Points type required by ocnn?
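
To make the question concrete, here is a minimal sketch of what I imagine the conversion should look like (the shapes, depths, and label handling are my own guesses; the points are assumed to already be scaled into [-1, 1]):

  import torch
  from ocnn.octree import Points, Octree

  # Stand-ins for my own data: an (N, 3) point tensor plus optional attributes.
  xyz = torch.rand(10000, 3) * 2 - 1
  normals = torch.nn.functional.normalize(torch.rand(10000, 3), dim=1)
  labels = torch.randint(0, 20, (10000,)).float()   # check the expected label shape/dtype in the Points docstring

  points = Points(points=xyz, normals=normals, labels=labels)

  # Afterwards the octree would be built from these Points as usual.
  octree = Octree(depth=11, full_depth=2)           # the depths here are placeholders
  octree.build_octree(points)
  octree.construct_all_neigh()

Is this the intended way, or should the conversion go through the dataset transforms instead?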

mIoU

Hello, may I inquire if the mIoU metric mentioned in the paper refers to the test/mIoU generated during training or the test/mIoU_part?

test

您好,我想问一下关于test的数据是需要test之后提交到官网得到最终的miou,但是怎么得到test出来的结果呢,似乎没有找到相关命令,谢谢您

dwconv

您好,我在进行dwconv库的操作时,好像出现了维度不匹配的问题。
微信图片_20240326181542
这是我的报错信息,谢谢

own dataset train

Hi, I'm working with my own dataset (following the SUN RGB-D dataset format) and I'm noticing something strange: every loss term except cls_loss is 0. It seems that the hyperparameters voxel_size, octree_depth, and self.scale_factor = 2 / (2**octree_depth * voxel_size) don't match each other? Sorry, I can't quite figure it out. Thank you very much for your help, all the best!
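
To make the question concrete, here is how I currently understand the relationship between those three values (my own reading, which may be exactly where I am going wrong):

  # Assumed relationship: the octree cube spans [-1, 1] (width 2) and is divided
  # into 2**octree_depth finest octants, so scale_factor should map one voxel of
  # voxel_size metres onto exactly one finest octant.
  octree_depth = 8            # hypothetical values, not the ones from my config
  voxel_size = 0.05           # metres

  scale_factor = 2 / (2**octree_depth * voxel_size)
  print(scale_factor)         # 0.15625

  # One voxel, after scaling, should equal the width of one finest octant.
  assert abs(voxel_size * scale_factor - 2 / 2**octree_depth) < 1e-9

If voxel_size or octree_depth does not match the actual point spacing of my data, I guess the octree ends up nearly empty at the finest level, which could explain the zero losses; is that the right way to think about it?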

Exclude Normals from Training

Hello,

thank you for your nice work! I am using your model for training on my own dataset, but my dataset does not contain any information about normals.

How can I exclude the normals during training?

When I set read_file = ReadFile(has_normal=False, has_color=True, has_label=True) in scannet.py line 222, I receive a KeyError: 'normals'.
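
The workaround I am currently experimenting with looks like this (my own guess, probably not the intended way; the dictionary keys and the input-feature handling are assumptions on my side):

  import numpy as np

  def add_dummy_normals(sample: dict) -> dict:
      # Pad a loaded sample with zero normals so that any transform which looks
      # up sample['normals'] keeps working even without real normals.
      if sample.get('normals') is None:
          num_points = sample['points'].shape[0]     # assumes an (N, 3) 'points' entry
          sample['normals'] = np.zeros((num_points, 3), dtype=np.float32)
      return sample

I assume I would also have to remove the normal channels from the input-feature setting in the config, but I am not sure which option is the right one.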

Thank you for your help

Best
Moritz

About the use of OctreeMaxPool and OctreeMaxUnpool

Hello, thank you very much for your excellent work; I am very interested in it.
When using octree_max_unpool, passing depth+1 in this part of the code makes the mask exceed the size of the data once it enters octree_depad, which raises an error. (screenshot of the code and error attached)
I printed the shapes inside octree_depad, as shown in the screenshot. (screenshot attached)
If I only pass in depth, without the +1, the issue goes away, but I am not sure whether that is correct. (screenshot attached)
Below is the code where I call OctreeMaxPool and OctreeMaxUnpool. (screenshots attached)
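
To make the question concrete, here is a minimal standalone sketch of how I am currently tracking the depth argument; the depth convention in the comments (pooling goes from depth d to d-1, and unpooling is called with the depth at which its input lives) is my assumption about ocnn and is exactly what I would like to confirm:

  import torch
  import ocnn
  from ocnn.octree import Points, Octree

  # A toy octree just to exercise the depth bookkeeping (not my real data).
  points = Points(points=torch.rand(4096, 3) * 2 - 1)
  octree = Octree(depth=6, full_depth=2)
  octree.build_octree(points)
  octree.construct_all_neigh()

  depth = 6
  data = torch.rand(int(octree.nnum[depth]), 8)           # 8-channel features at the finest depth

  pool = ocnn.nn.OctreeMaxPool(nempty=False, return_indices=True)
  unpool = ocnn.nn.OctreeMaxUnpool(nempty=False)

  pooled, indices = pool(data, octree, depth)             # features now live at depth - 1
  restored = unpool(pooled, indices, octree, depth - 1)   # called with the depth the input lives at
  print(data.shape, pooled.shape, restored.shape)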

how to train s3dis

Can this network be trained on the S3DIS dataset, and what pre-processing steps are required?

OctFormer performs worse with deeper Octree

Thank you for this wonderful contribution.

I'm currently attempting to train OctFormer for the downstream task of lidar place recognition on a dataset not used in your repo (the Oxford RobotCar dataset). I have kept the network layout of OctFormer essentially the same, but added a feature aggregator at the end of the network, and have tried both using the output features of the final stage directly and using an FPN structure, as you did for semantic segmentation. I have managed to get OctFormer performing reasonably well at this task with both configurations, but I have found that performance degrades rapidly when using an input Octree depth of 7 or higher. This doesn't make sense to me, as in my mind a deeper Octree corresponds to a higher-resolution input.

I was wondering if you may have any idea what could cause this performance drop? The dataset in question is somewhat unique, as I am using a pre-processed benchmark that has already been downsampled to 4096 points and normalised to [-1, 1] range (I know using only 4096 points defeats the purpose of OctFormer's efficiency, but I must start with this benchmark as it is the most common for this task). The original dataset has ~22,000 outdoor point clouds in the training set with typical max dimensions of [-30m, 30m]. The downsampling done in pre-processing uses an average voxel size of 0.3m, so my assumption is that an Octree depth of 8 should sufficiently represent this resolution (60m / 0.3m = 200 voxels per dimension, and 2^8 = 256 octants per dimension). This is on par with CNN-based methods which typically quantize these point clouds to 200 voxels per dimension.

I believe I am building the Octrees correctly, as I am first performing data augmentation (random rotation, scaling, cropping, etc), then clipping point clouds to [-1, 1], then constructing the Octree and calling octree.build_octree(), then collating batches with ocnn.octree.merge_octrees() and finally calling octree.construct_all_neigh(). The dataset only contains point position information, so for input features I am using the 'P' option for global position in ocnn.modules.InputFeature().
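
For concreteness, my construction looks roughly like the sketch below (simplified, with random points standing in for the RobotCar submaps):

  import torch
  import ocnn
  from ocnn.octree import Points, Octree, merge_octrees

  def build_octree(xyz: torch.Tensor, depth: int) -> Octree:
      # xyz: (N, 3) points, already augmented and normalised to roughly [-1, 1]
      points = Points(points=xyz)
      points.clip(min=-1.0, max=1.0)            # drop anything outside the unit cube
      octree = Octree(depth=depth, full_depth=2)
      octree.build_octree(points)
      return octree

  depth = 8
  clouds = [torch.rand(4096, 3) * 2 - 1 for _ in range(4)]    # stand-ins for the submaps
  octree = merge_octrees([build_octree(c, depth) for c in clouds])
  octree.construct_all_neigh()

  # Only positions are available, so the input feature is the global position 'P'.
  data = ocnn.modules.InputFeature(feature='P', nempty=True)(octree)
  print(data.shape)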

Is there any common reason why deeper Octrees would be performing worse? I could understand that very deep Octrees may degrade performance if going deeper doesn't increase the effective resolution any further, however I have found that the number of non empty octants after creating an Octree on this dataset typically increases until depth 8/9 and then plateaus and hardly increases with deeper Octrees, which is as expected from my earlier calculations.

For reference, here are the training loss curves and recall@1 on the test set with different Octree depths.

Thanks.

(training-loss and recall@1 plots attached)

conv_error

(screenshot of the error attached)
Hello, when using the octree to extract features, the error above occurred. Specifically, a shape error is raised when executing the octree convolution in the PatchEmbed module.
How can I resolve this? Thank you.

OctreeAttention Behavior with Patch Partitioning During Training

Hi Professor Wang,

When the model is training, assuming batch_size = 32, would it be possible for OctreeAttention to partition a patch at a specified depth using nodes from two adjacent samples in the batch? If this situation occurs, how should the self-attention be interpreted?

I would greatly appreciate your insights on this matter. I look forward to hearing from you at your earliest convenience.

Request to add instructions to work with a new dataset

Hi,
I ran into this work recently and really liked the writing and visuals. Great to see the reproducibility stamp too on the codebase.
We've been trying to use OctFormer on a dataset for 3D medical segmentation. So far I am trying to scratch my way toward getting the dataset into a format as close to ScanNet as possible, which might work. But it would be great for the community if generic instructions were added for working with new datasets. Kindly take it into consideration.

Thanks,
Best.

voxel size

Hello, I saw in your paper that the model performs well with a voxel size of 1 cm. Where in the code can I adjust the voxel size to 1 cm?
