torch-points3d

Pytorch framework for doing deep learning on point clouds.

Home Page: https://torch-points3d.readthedocs.io/en/latest/

License: Other

Topics: pytorch, deep-learning, point-cloud, pointnet, minkowskiengine, kpconv, segmentation, s3dis, scannet

Introduction


This is a framework for running common deep learning models for point cloud analysis tasks against classic benchmarks. It heavily relies on PyTorch Geometric and Facebook Hydra.

The framework allows lean yet complex models to be built with minimum effort and great reproducibility. It also provides a high-level API to democratize deep learning on point clouds. See our paper at 3DV for an overview of the framework's capabilities and benchmarks of state-of-the-art networks.

Overview

Requirements

  • CUDA 10 or higher (if you want the GPU version)
  • Python 3.7 or higher + headers (python-dev)
  • PyTorch 1.8.1 or higher (PyTorch >= 1.9 is recommended)
  • A sparse convolution backend (optional); see here for installation instructions

Install with

pip install torch
pip install torch-points3d
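
A quick way to check that the install succeeded is an import smoke test (this assumes torch was installed first, as above):

python -c "import torch_points3d"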

Project structure

├─ benchmark               # Output from various benchmark runs
├─ conf                    # All configurations for training and evaluation live there
├─ notebooks               # A collection of notebooks for result exploration and network debugging
├─ docker                  # Docker image that can be used for inference or training
├─ docs                    # All the documentation
├─ eval.py                 # Evaluation script
├─ find_neighbour_dist.py  # Script to find the optimal number of neighbours for neighbour search operations
├─ forward_scripts         # Scripts that run a forward pass on possibly unannotated data
├─ outputs                 # All outputs from your runs, sorted by date
├─ scripts                 # Some scripts to help manage the project
├─ torch_points3d
│   ├─ core                # Core components
│   ├─ datasets            # All code related to datasets
│   ├─ metrics             # All metrics and trackers
│   ├─ models              # All models
│   ├─ modules             # Basic modules that can be used in a modular way
│   ├─ utils               # Various utils
│   └─ visualization       # Visualization
├─ test
└─ train.py                # Main script to launch a training

As a general philosophy we have split datasets and models by task. For example, the datasets folder has five subfolders:

  • segmentation
  • classification
  • registration
  • object_detection
  • panoptic

where each folder contains the datasets related to that task.

Methods currently implemented

Please refer to our documentation for accessing some of those models directly from the API, and see our example notebooks for KPConv and RSConv for more details.

Available Tasks

  • Classification / Part Segmentation
  • Segmentation
  • Object Detection
  • Panoptic Segmentation
  • Registration

Available datasets

Segmentation

* S3DIS 1x1
* S3DIS Room
* S3DIS Fused - Sphere | Cylinder

Object detection and panoptic

* S3DIS Fused - Sphere | Cylinder

Registration

Classification

3D Sparse convolution support

We currently support Minkowski Engine > v0.5 and torchsparse >= v1.4.0 as backends for sparse convolutions. Those packages need to be installed independently from Torch Points3d; please follow the installation instructions and troubleshooting notes on the respective repositories. At the moment, MinkowskiEngine (see here; thank you Chris Choy) demonstrates faster training. Please be aware that torchsparse is still in beta and does not support CPU-only training.

Once you have set up one of those two sparse convolution frameworks, you can start using our high-level API to define a UNet backbone or simply an encoder:

from torch_points3d.applications.sparseconv3d import SparseConv3d

model = SparseConv3d("unet", input_nc=3, output_nc=5, num_layers=4, backend="torchsparse") # minkowski by default
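
The same call can build a plain encoder instead of a UNet; a minimal sketch that only changes the architecture string (all other parameters are kept from the example above):

model = SparseConv3d("encoder", input_nc=3, output_nc=5, num_layers=4, backend="torchsparse") # minkowski by default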

You can also assemble your own networks by using the modules provided in torch_points3d/modules/SparseConv3d/nn. For example, if you wish to use the torchsparse backend you can do the following:

import torch_points3d.modules.SparseConv3d as sp3d

sp3d.nn.set_backend("torchsparse")
conv = sp3d.nn.Conv3d(10, 10)
bn = sp3d.nn.BatchNorm(10)

Mixed Precision Training

Mixed precision allows for lower memory usage on the GPU and slightly faster training times by performing the sparse convolution, pooling, and gradient ops in float16. Mixed precision training is currently supported for CUDA training on SparseConv3d networks with the torchsparse backend. To enable mixed precision, ensure you have the latest version of torchsparse with pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git. Then, set training.enable_mixed=True in your training configuration files. If all the conditions are met, when you start training you will see a log entry stating:

[torch_points3d.models.base_model][INFO] - Model will use mixed precision

If, however, you try to use mixed precision training with an unsupported backend, you will see:

[torch_points3d.models.base_model][WARNING] - Mixed precision is not supported on this model, using default precision...
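
Because configuration is handled by Hydra, the flag can presumably also be passed as a command-line override instead of editing the config files; the task/model/data arguments below are just the ones from the getting-started example later in this README:

poetry run python train.py task=segmentation models=segmentation/pointnet2 model_name=pointnet2_charlesssg data=segmentation/shapenet-fixed training.enable_mixed=True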

Adding your model to the PretrainedRegistry

The PretrainedRegistry enables anyone to add their own pre-trained models and re-create them with only 2 lines of code, for fine-tuning or production purposes.

  • [You] Launch your model training with Wandb activated (wandb.log=True)
  • [TorchPoints3d] Once the training is finished, TorchPoints3d will upload your trained model within our custom checkpoint to your wandb.
  • [You] Within the PretainedRegistry class, add a key-value pair to its MODELS attribute. The key should describe your model, dataset and training hyper-parameters (possibly the best model); the value should be the URL referencing the .pt file on your wandb.

Example: the key pointnet2_largemsg-s3dis-1 with URL value https://api.wandb.ai/files/loicland/benchmark-torch-points-3d-s3dis/1e1p0csk/pointnet2_largemsg.pt points to the pointnet2_largemsg.pt file. The key describes a pointnet2_largemsg model trained on S3DIS fold 1.
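
For illustration, that key-value pair could look like this inside the class (a sketch only; the real attribute layout in the codebase may differ):

class PretainedRegistry:
    MODELS = {
        "pointnet2_largemsg-s3dis-1": "https://api.wandb.ai/files/loicland/benchmark-torch-points-3d-s3dis/1e1p0csk/pointnet2_largemsg.pt",
    }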

  • [Anyone] By using the PretainedRegistry class and providing the key, the associated model weights will be downloaded and the pre-trained model will be ready to use with its transforms.
[In]:
from torch_points3d.applications.pretrained_api import PretainedRegistry

model = PretainedRegistry.from_pretrained("pointnet2_largemsg-s3dis-1")

print(model.wandb)
print(model.print_transforms())

[Out]:
=================================================== WANDB URLS ======================================================
WEIGHT_URL: https://api.wandb.ai/files/loicland/benchmark-torch-points-3d-s3dis/1e1p0csk/pointnet2_largemsg.pt
LOG_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/logs
CHART_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk
OVERVIEW_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/overview
HYDRA_CONFIG_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/files/hydra-config.yaml
OVERRIDES_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/files/overrides.yaml
======================================================================================================================

pre_transform = None
test_transform = Compose([
    FixedPoints(20000, replace=True),
    XYZFeature(axis=['z']),
    AddFeatsByKeys(rgb=True, pos_z=True),
    Center(),
    ScalePos(scale=0.5),
])
train_transform = Compose([
    FixedPoints(20000, replace=True),
    RandomNoise(sigma=0.001, clip=0.05),
    RandomRotate((-180, 180), axis=2),
    RandomScaleAnisotropic([0.8, 1.2]),
    RandomAxesSymmetry(x=True, y=False, z=False),
    DropFeature(proba=0.2, feature='rgb'),
    XYZFeature(axis=['z']),
    AddFeatsByKeys(rgb=True, pos_z=True),
    Center(),
    ScalePos(scale=0.5),
])
val_transform = Compose([
    FixedPoints(20000, replace=True),
    XYZFeature(axis=['z']),
    AddFeatsByKeys(rgb=True, pos_z=True),
    Center(),
    ScalePos(scale=0.5),
])
inference_transform = Compose([
    FixedPoints(20000, replace=True),
    XYZFeature(axis=['z']),
    AddFeatsByKeys(rgb=True, pos_z=True),
    Center(),
    ScalePos(scale=0.5),
])
pre_collate_transform = Compose([
    PointCloudFusion(),
    SaveOriginalPosId,
    GridSampling3D(grid_size=0.04, quantize_coords=False, mode=mean),
])

Developer guidelines

Setting up the repo

We use Poetry for managing our packages. In order to get started, clone this repository and run the following command from the root of the repo:

poetry install --no-root

This will install all required dependencies in a new virtual environment.

Activate the environment

poetry shell

You can check that the install has been successful by running:

python -m unittest -v

For pycuda support (only needed for the registration tasks):

pip install pycuda

Getting started: Train pointnet++ on the part segmentation task for the ShapeNet dataset

poetry run python train.py task=segmentation models=segmentation/pointnet2 model_name=pointnet2_charlesssg data=segmentation/shapenet-fixed

You should see something like this:

[Screenshot: training log output]

The config for pointnet++ is a good example of how to define a model and is as follows:

# PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space (https://arxiv.org/abs/1706.02413)
# Credit Charles R. Qi: https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_part_seg_msg_one_hot.py

pointnet2_onehot:
  architecture: pointnet2.PointNet2_D
  conv_type: "DENSE"
  use_category: True
  down_conv:
    module_name: PointNetMSGDown
    npoint: [1024, 256, 64, 16]
    radii: [[0.05, 0.1], [0.1, 0.2], [0.2, 0.4], [0.4, 0.8]]
    nsamples: [[16, 32], [16, 32], [16, 32], [16, 32]]
    down_conv_nn:
      [
        [[FEAT, 16, 16, 32], [FEAT, 32, 32, 64]],
        [[32 + 64, 64, 64, 128], [32 + 64, 64, 96, 128]],
        [[128 + 128, 128, 196, 256], [128 + 128, 128, 196, 256]],
        [[256 + 256, 256, 256, 512], [256 + 256, 256, 384, 512]],
      ]
  up_conv:
    module_name: DenseFPModule
    up_conv_nn:
      [
        [512 + 512 + 256 + 256, 512, 512],
        [512 + 128 + 128, 512, 512],
        [512 + 64 + 32, 256, 256],
        [256 + FEAT, 128, 128],
      ]
    skip: True
  mlp_cls:
    nn: [128, 128]
    dropout: 0.5
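
Assuming this config lives with the other segmentation model configs, it can presumably be selected by name just like pointnet2_charlesssg in the command above (the data argument here is illustrative):

poetry run python train.py task=segmentation models=segmentation/pointnet2 model_name=pointnet2_onehot data=segmentation/shapenet-fixed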

Inference

Inference script

We provide a script for running a given pre-trained model on custom data that may not be annotated. You will find an example of this for the part segmentation task on ShapeNet. Just like for the rest of the codebase, most of the customization happens through config files, and the provided example can be extended to other datasets. You can also easily create your own from there. Going back to the part segmentation task, say you have a folder full of point clouds that you know are airplanes, and you have the checkpoint of a model trained on airplanes and potentially other classes; simply edit the config.yaml and shapenet.yaml and run the following command:

python forward_scripts/forward.py

The result of the forward run will be placed in the specified output_folder, and you can use the provided notebook to explore the results. Below is an example of the outcome of using a model trained on caps only to find the parts of airplanes and caps.

[Screenshot: results exploration notebook]

Containerizing your model with Docker

Finally, for people interested in deploying their models to production environments, we provide a Dockerfile as well as a build script. Say you have trained a network for semantic segmentation that produced the weights <outputfolder/weights.pt>; the following command will build a docker image for you:

cd docker
./build.sh outputfolder/weights.pt

You can then use it to run a forward pass on all the point clouds in input_path and generate the results in output_path:

docker run -v /test_data:/in -v /test_data/out:/out pointnet2_charlesssg:latest python3 forward_scripts/forward.py dataset=shapenet data.forward_category=Cap input_path="/in" output_path="/out"

The -v option mounts a local directory to the container's file system. For example in the command line above, /test_data/out will be mounted at the location /out. As a consequence, all files written in /out will be available in the folder /test_data/out on your machine.

Profiling

We advise using snakeviz and cProfile.

Use cProfile to profile your code

poetry run python -m cProfile -o {your_name}.prof train.py ... debugging.profiling=True

And visualize results using snakeviz.

snakeviz {your_name}.prof

It is also possible to use torch.utils.bottleneck

python -m torch.utils.bottleneck /path/to/source/script.py [args]

Troubleshooting

Cannot compile certain CUDA Kernels or seg faults while running the tests

Ensure that at least PyTorch 1.8.0 is installed and verify that cuda/bin and cuda/include are in your $PATH and $CPATH respectively, e.g.:

$ python -c "import torch; print(torch.__version__)"
>>> 1.8.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...

Undefined symbol / Updating Pytorch

When we update the version of Pytorch that is used, the compiled packages need to be reinstalled, otherwise you will run into an error that looks like this:

... scatter_cpu.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c1012CUDATensorIdEv

This can happen for the following libraries:

  • torch-points-kernels
  • torch-scatter
  • torch-cluster
  • torch-sparse

An easy way to fix this is to run the following command with the virtual env activated:

pip uninstall torch-scatter torch-sparse torch-cluster torch-points-kernels -y
rm -rf ~/.cache/pip
poetry install

CUDA kernel failed : no kernel image is available for execution on the device

This can happen when trying to run the code on a different GPU than the one used to compile the torch-points-kernels library. Uninstall torch-points-kernels, clear cache, and reinstall after setting the TORCH_CUDA_ARCH_LIST environment variable. For example, for compiling with a Tesla T4 (Turing 7.5) and running the code on a Tesla V100 (Volta 7.0) use:

export TORCH_CUDA_ARCH_LIST="7.0;7.5"

See this useful chart for more architecture compatibility.

Cannot use wandb on Windows

Raises OSError: [WinError 6] The handle is invalid / wandb: ERROR W&B process failed to launch. Wandb is currently broken on Windows (see this issue); a workaround is to use the command line argument wandb.log=false.

Exploring your experiments

We provide a notebook based on pyvista and panel that allows you to explore your past experiments visually. When using Jupyter Lab you will have to install an extension:

jupyter labextension install @pyviz/jupyterlab_pyviz

Run through the notebook and you should see a dashboard that looks like the following:

[Screenshot: experiment dashboard]

Contributing

Contributions are welcome! The only asks are that you stick to the styling and that you add tests as you add more features!

For styling you can use pre-commit hooks to help you:

pre-commit install

A sequence of checks will be run for you and you may have to add the fixed files again to the staged files.

For docstrings we use the numpy style; for those who use Visual Studio Code, there is a great extension that can help with that. Install it, set the format to numpy, and you should be good to go!

Finally, if you want to have a direct chat with us, feel free to join our slack; just shoot us an email and we'll add you.

Citing

If you find our work useful, do not hesitate to cite it:

@inproceedings{
  tp3d,
  title={Torch-Points3D: A Modular Multi-Task Framework for Reproducible Deep Learning on 3D Point Clouds},
  author={Chaton, Thomas and Chaulet, Nicolas and Horache, Sofiane and Landrieu, Loic},
  booktitle={2020 International Conference on 3D Vision (3DV)},
  year={2020},
  organization={IEEE},
  url={https://github.com/nicolas-chaulet/torch-points3d}
}

and please also include a citation to the models or the datasets you have used in your experiments!

torch-points3d's People

Contributors

3llobo, ccinc, chaitjo, daili650, dependabot[bot], fengziyue, gabrieleangeletti, ggrzeczkowicz, guochengqian, harrydobbs, humanpose1, jloveu, loicland, mitchellwest, nicolas-chaulet, pre-commit-ci[bot], ptoews, rancheng, saedrna, simone-fontana, stakhan, tchaton, tristanheywood, uakh, wundersam, yhijioka, zeliu98


torch-points3d's Issues

SOTA for S3DIS / pointnet++

How to define SOTA:

  1. Clone the original repo for a given task and dataset
  2. Train for 100 epochs -> how much do we get?
  3. Train 100 epochs in our benchmark code.

SOTA for S3DIS / RSConv

How to define SOTA:

  1. Clone the original repo for a given task and dataset
  2. Train for 100 epochs -> how much do we get?
  3. Train 100 epochs in our benchmark code.

Add kubeflow pipeline scripts

Add support for Kubeflow as a pipeline for running a training with some customisable parameters. The pipeline will be specific to a given gcloud project and should be described as being an example implementation. If other people want to use it they would need to set up Kubeflow on their cloud provider.

Data.x

Right now, the way the features are added is very dataset-dependent: in ShapeNet we add the normals here, in S3DIS we add the colors and the elevation here.

For KPConv, we add a column of ones directly in set_input here.

It seems a bit unorganized...
Like transform or pre_transform, it would be great to create classes that add the features we want directly on our dataset.

import torch

class MyFeature:
    r"""Add a great feature.

    Args:
        param: parameter for our feature
    """
    def __init__(self, param):
        self.param = param

    def __call__(self, data):
        # replace add_feature by anything you want
        new_feature = add_feature(self.param)
        if data.x is not None:
            data.x = torch.cat([data.x, new_feature], dim=-1)
        else:
            data.x = new_feature
        return data
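
A hypothetical usage, plugging the class into the existing transform machinery (MyFeature and its param are the placeholder names from the sketch above):

from torch_geometric.transforms import Compose

pre_transform = Compose([MyFeature(param=0.1)])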

So in the yaml, we would just need to specify the features we want to add.
The advantage is that we could choose whether to test S3DIS (or ShapeNet) with normals, with colors, with both, or with neither.
Moreover, if we want to add new handcrafted features (curvature, planarity, sphericity...), it would be easier.

What do you think about it?

Improve placeholders and variables in the model config

Right now the only placeholder supported is 'FEAT', the number of input features. This placeholder needs to be resolved by each model separately.

It would be better to resolve all placeholders as part of the core of the framework, so that model implementations don't need to worry about it. We could also allow users to do arithmetic in any of the fields (e.g. nn: [64, 64*2]).

We could also have a 'define_placeholders' section of the config where users can create their own placeholders. E.g.

MyModel:
    type: RSConv
    define_placeholders:
        L1_OUT: 64
    down_conv:
         nn: [[FEAT, 32, L1_OUT], [L1_OUT, L1_OUT*2, 64]]
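
A minimal sketch of how such placeholder resolution could work in the framework core (resolve is a hypothetical helper, not an existing API):

def resolve(value, placeholders):
    # Substitute placeholder names, then evaluate arithmetic such as "64*2"
    if isinstance(value, str):
        for name, val in placeholders.items():
            value = value.replace(name, str(val))
        return eval(value)
    if isinstance(value, list):
        return [resolve(v, placeholders) for v in value]
    return value

resolve([["FEAT", 32, "L1_OUT"], ["L1_OUT", "L1_OUT*2", 64]], {"FEAT": 3, "L1_OUT": 64})
# -> [[3, 32, 64], [64, 128, 64]]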

problem in compute_sparse_delta

I think there is a problem here. Indeed, we are computing the difference between raw_pos (which is an xyz coordinate) and data.pos (which is now a voxel coordinate). Therefore, the operation is not homogeneous.

Tools for creating custom datasets

Provide a set of utils and tools to help people test the models on their own custom pointcloud datasets.

We can do this by creating a base PointCloud class that people can inherit from. For example I could create AHNPointCloud(PointCloud) which represents a single file from the AHN aerial pointcloud dataset. I'm thinking each derived class should have methods to init itself from a numpy structured array, and write itself back to a numpy structured array.

Then the base class can use pdal to create structured arrays from any pointcloud file, and create any pointcloud file from a structured array. So say I want to add a density field to my AHNPointCloud: I put the density calculation in init_from_recarray, and then when to_recarray is called the density will be one of the columns; the base class will handle writing this recarray to a .laz or .e57 file, so you can easily view the new field in cloudcompare.

We can also use recarrays to store the pointclouds as .npy files which will make dataloading faster. And you can easily create a pandas dataframe from a recarray so we can provide utils to calculate statistics on the features
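
A rough sketch of the proposed base class; the class and method names come from the proposal above, but the implementation details are purely illustrative:

import numpy as np

class PointCloud:
    """Base class for custom pointcloud datasets."""

    @classmethod
    def init_from_recarray(cls, arr: np.ndarray) -> "PointCloud":
        # arr is a numpy structured array with one field per point attribute
        obj = cls()
        obj.data = arr
        return obj

    def to_recarray(self) -> np.ndarray:
        # the base class would hand this to pdal to write .laz / .e57 files
        return self.data

class AHNPointCloud(PointCloud):
    """A single file from the AHN aerial pointcloud dataset."""

    @classmethod
    def init_from_recarray(cls, arr: np.ndarray) -> "PointCloud":
        obj = super().init_from_recarray(arr)
        # derived fields such as density would be computed and appended here
        return obj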

Add explanation capabilities

Captum is a model interpretability and understanding library for PyTorch. Captum means comprehension in Latin and contains general-purpose implementations of integrated gradients, saliency maps, smoothgrad, vargrad and others for PyTorch models. It has quick integration for models built with domain-specific libraries such as torchvision, torchtext, and others.

https://github.com/pytorch/captum

Add pruning capabilities

Distiller is an open-source Python package for neural network compression research.

Network compression can reduce the memory footprint of a neural network, increase its inference speed and save energy. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic.

https://github.com/NervanaSystems/distiller

SOTA for S3DIS / Randlanet

How to define SOTA:

  1. Clone the original repo for a given task and dataset
  2. Train for 100 epochs -> how much do we get?
  3. Train 100 epochs in our benchmark code.

PointCNN segmentation example

Hi, thanks for the great library!
Would it be possible for you to add a segmentation example using the PointCNN architecture?
Thanks in advance.
Sayak

Bug with message passing models

Models based on message passing fail in the radius search for some strange reason:

/pytorch/aten/src/THC/THCTensorScatterGather.cu:160: void THCudaTensor_scatterAddKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = long, Dims = 1]: block: [0,0,0], thread: [319,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
  0%|                                                                                                                                                                          | 0/7004 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 156, in <module>
    main()
  File "/home/tristan/.conda/envs/pdalenv/lib/python3.6/site-packages/hydra/main.py", line 24, in decorated_main
    strict=strict,
 <...>
  File "/home/tristan/deeppointcloud-benchmarks/src/core/base_conv/message_passing.py", line 64, in forward
    row, col = self.neighbour_finder(pos, pos[idx], batch_x=batch, batch_y=batch[idx])
  File "/home/tristan/deeppointcloud-benchmarks/src/core/neighbourfinder/neighbour_finder.py", line 16, in __call__
    return self.find_neighbours(x, y, batch_x, batch_y)
  File "/home/tristan/deeppointcloud-benchmarks/src/core/neighbourfinder/neighbour_finder.py", line 33, in find_neighbours
    return radius(x, y, self._radius, batch_x, batch_y, max_num_neighbors=self._max_num_neighbors)
  File "/home/tristan/.conda/envs/pdalenv/lib/python3.6/site-packages/torch_cluster/radius.py", line 61, in radius
    max_num_neighbors)
RuntimeError: scan failed to synchronize: device-side assert triggered

add new task : learning descriptor for point cloud registration

It would be great to also have a benchmark for the descriptors, so that we could compare the different convolutions on this task (KPConv, RSConv, pointnet++...).
Dataset: 3DMatch

  1. we have to download the dataset (it is RGBD frames)
  2. create fragments
  3. find pairs of similar patches from different fragments
  4. (optionally some preprocessing)
  5. siamese network to learn descriptors on patches (see the sketch below)
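
A minimal sketch of step 5, assuming a standard contrastive loss over descriptor pairs (the loss choice and all names here are illustrative, not part of the proposal):

import torch
import torch.nn.functional as F

def contrastive_loss(desc_a, desc_b, match, margin=1.0):
    # desc_a, desc_b: [N, D] descriptors from the two siamese branches
    # match: [N] float tensor, 1 where the patches correspond, 0 otherwise
    dist = F.pairwise_distance(desc_a, desc_b)
    pos = match * dist.pow(2)                         # pull matching pairs together
    neg = (1 - match) * F.relu(margin - dist).pow(2)  # push non-matching pairs apart
    return (pos + neg).mean()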

Another option is to learn descriptors on the fragment itself (like FCGF, treating it like a segmentation network) and not on patches.

Differences with original RandLA-Net.

Hi, thank you for making this framework available!

I am interested in the RandLA-Net implementation.

I see you have two variants, namely Randlanet_Conv and Randlanet_Res.
I was wondering what all the differences are between these two and the original implementation, found here https://github.com/QingyongHu/RandLA-Net and corresponding to the paper https://arxiv.org/abs/1911.11236 (where the network architecture is detailed in the appendix).

Could you please describe all the differences?

Thank you very much!

Add ConvPoint

It would be great to also add ConvPoint in the framework.
There is an implementation here by the author in Pytorch so it's not complicated to add.

How to use KPConv as a feature extractor?

Hello Team,

Thanks for your wonderful work. I have work that needs per-point features and I decided to use KPConv as our backbone. With your framework it is easy to write a segmentation network, thanks to the encapsulation. But I want to use only the features before the final MLP is applied. How can I achieve this?

Thanks!
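
One possible direction (a sketch, not a confirmed answer from the maintainers): the applications API shown for SparseConv3d earlier on this page has a KPConv counterpart that can be instantiated as a backbone, whose output features could be used before any classification head. The exact module path and arguments below are assumptions:

from torch_points3d.applications.kpconv import KPConv  # assumed counterpart of applications.sparseconv3d

backbone = KPConv("unet", input_nc=3, output_nc=64, num_layers=4)
# out = backbone(data)  # out.x would then hold per-point features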

run "poetry run python train.py task=segmentation model_type=pointnet2 model_name=pointnet2_charlesssg dataset=shapenet" get "Sizes of tensors must match except in dimension 0." error in code batch.py

Hi all,
Sorry to bother you. I followed the process in the README and everything was OK, but when I run poetry run python train.py task=segmentation model_type=pointnet2 model_name=pointnet2_charlesssg dataset=shapenet an error occurs:

File "deeppointcloud-benchmarks/src/datasets/batch.py", line 48, in from_data_list
    batch[key] = torch.stack(batch[key])
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 1576 and 2885 in dimension 1 at /pytorch/aten/src/TH/generic/THTensor.cpp:689

Configuration feedback

Hi,
I am the author of Hydra. Thanks for using it!
Some feedback about your configuration:

In segmentation.yaml, you have a bunch of different models.

A better modeling of the config would (probably) be to use composition to select which segmentation model you are using, and have each one in a different file.

This will work well if you are only using a single model at a time (without looking at the code, that's most likely the case).
