
This is a complete package of recent deep learning methods for 3D point clouds in pytorch (with pretrained models).

License: MIT License

Python 80.02% C++ 6.00% Cuda 13.09% C 0.89%

learning3d's Introduction

Learning3D: A Modern Library for Deep Learning on 3D Point Cloud Data.

Documentation | Blog | Demo

Learning3D is an open-source library that supports the development of deep learning algorithms that deal with 3D data. Learning3D exposes a set of state-of-the-art deep neural networks in Python. The code is modular to support further development. We welcome contributions from the open-source community.

Latest News:

  1. [7 Apr, 2024]: learning3d is now available as a PyPI package.
  2. [24 Oct, 2023]: MaskNet++ is now a part of the learning3d library.
  3. [12 May, 2022]: The ChamferDistance loss function is incorporated in learning3d. This is a purely PyTorch-based loss function.
  4. [24 Dec, 2020]: MaskNet is now ready to enhance the performance of registration algorithms in learning3d for occluded point clouds.
  5. [24 Dec, 2020]: A loss based on the predicted and ground-truth correspondences has been added to learning3d, following the Correspondence Matrices are Underrated paper.
  6. [24 Dec, 2020]: PointConv, latent feature estimation using convolutions on point clouds, is now available in learning3d.
  7. [16 Oct, 2020]: DeepGMR, registration using Gaussian mixture models, is now available in learning3d.
  8. [14 Oct, 2020]: Now, use your own data in learning3d. (Check out the UserData functionality!)

PyPI package setup

Setup from the PyPI server

pip install learning3d

Setup using code

git clone https://github.com/vinits5/learning3d.git
cd learning3d
git checkout pypi_v0.1.0
python3 -m pip install .
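
Either installation route can be sanity-checked with a quick import (a minimal check, assuming PyTorch is already installed; the constructor arguments follow the documentation below):

# Quick check that the package is importable and a model can be built.
from learning3d.models import PointNet

pn = PointNet(emb_dims=1024, input_shape='bnc', use_bn=False)
print(sum(p.numel() for p in pn.parameters()), "parameters")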

Available Computer Vision Algorithms in Learning3D

Sr. No. Tasks Algorithms
1 Classification PointNet, DGCNN, PPFNet, PointConv
2 Segmentation PointNet, DGCNN
3 Reconstruction Point Completion Network (PCN)
4 Registration PointNetLK, PCRNet, DCP, PRNet, RPM-Net, DeepGMR
5 Flow Estimation FlowNet3D
6 Inlier Estimation MaskNet, MaskNet++

Available Pretrained Models

  1. PointNet
  2. PCN
  3. PointNetLK
  4. PCRNet
  5. DCP
  6. PRNet
  7. FlowNet3D
  8. RPM-Net (clean-trained.pth, noisy-trained.pth, partial-pretrained.pth)
  9. DeepGMR
  10. PointConv (Download from this link)
  11. MaskNet
  12. MaskNet++ / MaskNet2

Available Datasets

  1. ModelNet40

Available Loss Functions

  1. Classification Loss (Cross Entropy)
  2. Registration Losses (FrobeniusNormLoss, RMSEFeaturesLoss)
  3. Distance Losses (Chamfer Distance, Earth Mover's Distance)
  4. Correspondence Loss (based on this paper)

Technical Details

Supported OS

  1. Ubuntu 16.04
  2. Ubuntu 18.04
  3. Ubuntu 20.04.6
  4. Linux Mint

Requirements

  1. CUDA 10.0 or higher
  2. PyTorch 1.3 or higher
  3. Python 3.8

How to use this library?

Important Note: Clone this repository into your project. Please don't add your own code inside the "learning3d" folder.

  1. All networks are defined in the module "models".
  2. All loss functions are defined in the module "losses".
  3. Data loaders are pre-defined in data_utils/dataloaders.py file.
  4. All pretrained models are provided in learning3d/pretrained folder.

Documentation

B: Batch Size, N: No. of points and C: Channels.

Use of Point Embedding Networks:

from learning3d.models import PointNet, DGCNN, PPFNet
pn = PointNet(emb_dims=1024, input_shape='bnc', use_bn=False)
dgcnn = DGCNN(emb_dims=1024, input_shape='bnc')
ppf = PPFNet(features=['ppf', 'dxyz', 'xyz'], emb_dims=96, radius=0.3, num_neighbours=64)

Sr. No. Variable Data type Shape Choices Use
1. emb_dims Integer Scalar 1024, 512 Size of feature vector for each point
2. input_shape String - 'bnc', 'bcn' Shape of input point cloud
3. output tensor BxCxN - High dimensional embeddings for each point
4. features List of Strings - ['ppf', 'dxyz', 'xyz'] Use of various features
5. radius Float Scalar 0.3 Radius of cluster for computing local features
6. num_neighbours Integer Scalar 64 Maximum number of points to consider per cluster
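
For illustration, a forward pass on random data might look like the sketch below. It assumes the embedding networks are called directly on a BxNx3 tensor (input_shape='bnc') and return a BxCxN feature tensor, as described in the table above.

import torch
from learning3d.models import PointNet, DGCNN

pn = PointNet(emb_dims=1024, input_shape='bnc', use_bn=False)
dgcnn = DGCNN(emb_dims=1024, input_shape='bnc')

points = torch.rand(10, 1024, 3)   # B=10 clouds, N=1024 points, C=3 coordinates
pn_features = pn(points)           # expected shape: B x emb_dims x N
dgcnn_features = dgcnn(points)     # expected shape: B x emb_dims x N
print(pn_features.shape, dgcnn_features.shape)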

Use of Classification / Segmentation Network:

from learning3d.models import Classifier, PointNet, Segmentation
classifier = Classifier(feature_model=PointNet(), num_classes=40)
seg = Segmentation(feature_model=PointNet(), num_classes=40)

Sr. No. Variable Data type Shape Choices Use
1. feature_model Object - PointNet / DGCNN Point cloud embedding network
2. num_classes Integer Scalar 10, 40 Number of object categories to be classified
3. output tensor Classification: Bx40, Segmentation: BxNx40 10, 40 Probabilities of each category or each point
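
A minimal usage sketch, assuming the classification and segmentation networks are called on a BxNx3 point cloud and return Bx40 and BxNx40 scores respectively, per the table above:

import torch
from learning3d.models import Classifier, PointNet, Segmentation

classifier = Classifier(feature_model=PointNet(), num_classes=40)
seg = Segmentation(feature_model=PointNet(), num_classes=40)

points = torch.rand(8, 1024, 3)    # B=8, N=1024
class_scores = classifier(points)  # expected shape: 8 x 40
point_scores = seg(points)         # expected shape: 8 x 1024 x 40
print(class_scores.shape, point_scores.shape)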

Use of Registration Networks:

from learning3d.models import PointNet, PointNetLK, DCP, iPCRNet, PRNet, PPFNet, RPMNet, DeepGMR
pnlk = PointNetLK(feature_model=PointNet(), delta=1e-02, xtol=1e-07, p0_zero_mean=True, p1_zero_mean=True, pooling='max')
dcp = DCP(feature_model=PointNet(), pointer_='transformer', head='svd')
pcrnet = iPCRNet(feature_model=PointNet(), pooling='max')
rpmnet = RPMNet(feature_model=PPFNet())
deepgmr = DeepGMR(use_rri=True, feature_model=PointNet(), nearest_neighbors=20)

Sr. No. Variable Data type Choices Use Algorithm
1. feature_model Object PointNet / DGCNN Point cloud embedding network PointNetLK
2. delta Float Scalar Parameter to calculate approximate jacobian PointNetLK
3. xtol Float Scalar Check tolerance to stop iterations PointNetLK
4. p0_zero_mean Boolean True/False Subtract mean from template point cloud PointNetLK
5. p1_zero_mean Boolean True/False Subtract mean from source point cloud PointNetLK
6. pooling String 'max' / 'avg' Type of pooling used to get global feature vector PointNetLK
7. pointer_ String 'transformer' / 'identity' Choice for Transformer/Attention network DCP
8. head String 'svd' / 'mlp' Choice of module to estimate registration params DCP
9. use_rri Boolean True/False Use nearest neighbors to estimate point cloud features. DeepGMR
10. nearest_neighbors Integer 20/any integer Number of nearest neighbors used to estimate features DeepGMR
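
As a rough usage sketch (the call signature is an assumption based on the example scripts, not a documented guarantee), the registration networks take a template and a source point cloud, both of shape BxNx3, and estimate the transformation aligning the source to the template. See the scripts in the examples folder (e.g. test_pnlk.py, test_dcp.py) for the exact inputs and output keys.

import torch
from learning3d.models import PointNet, PointNetLK, DCP

pnlk = PointNetLK(feature_model=PointNet(), delta=1e-02, xtol=1e-07,
                  p0_zero_mean=True, p1_zero_mean=True, pooling='max')
dcp = DCP(feature_model=PointNet(), pointer_='transformer', head='svd')

template = torch.rand(4, 1024, 3)   # B=4, N=1024
source = torch.rand(4, 1024, 3)

# Assumed interface: model(template, source) -> registration result
# (e.g. an estimated transformation); verify against the examples folder.
result_pnlk = pnlk(template, source)
result_dcp = dcp(template, source)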

Use of Inlier Estimation Network (MaskNet):

from learning3d.models import MaskNet, PointNet, MaskNet2
masknet = MaskNet(feature_model=PointNet(), is_training=True)
masknet2 = MaskNet2(feature_model=PointNet(), is_training=True)

Sr. No. Variable Data type Choices Use
1. feature_model Object PointNet / DGCNN Point cloud embedding network
2. is_training Boolean True / False Specify if the network will undergo training or testing
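
A hedged sketch of the assumed interface: MaskNet takes a template and a (possibly partial) source point cloud and predicts which template points are inliers. The exact return values should be checked against examples/test_masknet.py.

import torch
from learning3d.models import MaskNet, PointNet

masknet = MaskNet(feature_model=PointNet(), is_training=False)

template = torch.rand(2, 1024, 3)   # full template clouds
source = torch.rand(2, 768, 3)      # partial/occluded source clouds

# Assumed: the network returns the masked template and/or a per-point inlier mask.
output = masknet(template, source)
print(type(output))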

Use of Point Completion Network:

from learning3d.models import PCN
pcn = PCN(emb_dims=1024, input_shape='bnc', num_coarse=1024, grid_size=4, detailed_output=True)

Sr. No. Variable Data type Choices Use
1. emb_dims Integer 1024, 512 Size of feature vector for each point
2. input_shape String 'bnc' / 'bcn' Shape of input point cloud
3. num_coarse Integer 1024 Number of points in the coarse output point cloud
4. grid_size Integer 4, 8, 16 Size of grid used to produce detailed output
5. detailed_output Boolean True / False Choice for additional module to create detailed output point cloud
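
A short, assumed usage sketch: PCN consumes a partial point cloud of shape BxNx3 and produces a coarse completion with num_coarse points (and a denser completion when detailed_output=True). The exact output format should be confirmed against examples/test_pcn.py.

import torch
from learning3d.models import PCN

pcn = PCN(emb_dims=1024, input_shape='bnc', num_coarse=1024, grid_size=4, detailed_output=True)

partial = torch.rand(2, 512, 3)   # partial input clouds
completion = pcn(partial)         # assumed: coarse (and detailed) output point clouds
print(type(completion))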

Use of PointConv:

Use the following to load the pretrained model provided by the authors.

from learning3d.models import create_pointconv
PointConv = create_pointconv(classifier=True, pretrained='path of checkpoint')
ptconv = PointConv(emb_dims=1024, input_shape='bnc', input_channel_dim=6, classifier=True)

OR
Use the following to create your own PointConv model.

PointConv = create_pointconv(classifier=False, pretrained=None)
ptconv = PointConv(emb_dims=1024, input_shape='bnc', input_channel_dim=3, classifier=True)

The PointConv variable returned by create_pointconv is a class. Users can subclass it and override the create_classifier and create_structure methods to change PointConv's network architecture.

Sr. No. Variable Data type Choices Use
1. emb_dims Integer 1024, 512 Size of feature vector for each point
2. input_shape String 'bnc' / 'bcn' Shape of input point cloud
3. input_channel_dim Integer 3/6 Define if point cloud contains only xyz co-ordinates or normals and colors as well
4. classifier Boolean True / False Choose if you want to use a classifier with PointConv
5. pretrained String - Give path of the pretrained classifier model (only use it for weights given by authors)
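
Since the object returned by create_pointconv is a class, its architecture can be customised by subclassing and overriding the two methods named above. The sketch below is purely hypothetical: the overridden bodies simply defer to the parent implementation, where a user would substitute their own layers.

from learning3d.models import create_pointconv

PointConv = create_pointconv(classifier=False, pretrained=None)

class MyPointConv(PointConv):
    def create_structure(self):
        # Placeholder: replace with a custom feature-extraction structure.
        return super().create_structure()

    def create_classifier(self):
        # Placeholder: replace with a custom classification head.
        return super().create_classifier()

ptconv = MyPointConv(emb_dims=1024, input_shape='bnc', input_channel_dim=3, classifier=True)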

Use of Flow Estimation Network:

from learning3d.models import FlowNet3D
flownet = FlowNet3D()

Use of Data Loaders:

from learning3d.data_utils import ModelNet40Data, ClassificationData, RegistrationData, FlowData
modelnet40 = ModelNet40Data(train=True, num_points=1024, download=True)
classification_data = ClassificationData(data_class=ModelNet40Data())
registration_data = RegistrationData(algorithm='PointNetLK', data_class=ModelNet40Data(), partial_source=False, partial_template=False, noise=False)
flow_data = FlowData()

Sr. No. Variable Data type Choices Use
1. train Boolean True / False Split data as train/test set
2. num_points Integer 1024 Number of points in each point cloud
3. download Boolean True / False If data is not available, download it
4. data_class Object - Specify which dataset to use
5. algorithm String 'PointNetLK', 'PCRNet', 'DCP', 'iPCRNet' Algorithm used for registration
6. partial_source Boolean True / False Create partial source point cloud
7. partial_template Boolean True / False Create partial template point cloud
8. noise Boolean True / False Add noise in source point cloud
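
These dataset objects behave like standard PyTorch datasets, so they can be wrapped in torch.utils.data.DataLoader for batching. A minimal sketch, assuming ClassificationData yields (point cloud, label) pairs; the batch size and worker count below are arbitrary:

import torch
from learning3d.data_utils import ModelNet40Data, ClassificationData

classification_data = ClassificationData(data_class=ModelNet40Data(train=True, num_points=1024, download=True))
train_loader = torch.utils.data.DataLoader(classification_data, batch_size=32, shuffle=True, num_workers=4)

for points, labels in train_loader:    # assumed per-item structure: (points, label)
    print(points.shape, labels.shape)  # e.g. torch.Size([32, 1024, 3]) and the label batch
    break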

Use Your Own Data:

from learning3d.data_utils import UserData
dataset = UserData(application, data_dict)

Sr. No. Application Required Key Respective Value
1. 'classification' 'pcs' Point Clouds (BxNx3)
'labels' Ground Truth Class Labels (BxN)
2. 'registration' 'template' Template Point Clouds (BxNx3)
'source' Source Point Clouds (BxNx3)
'transformation' Ground Truth Transformation (Bx4x4)
3. 'flow_estimation' 'frame1' Point Clouds (BxNx3)
'frame2' Point Clouds (BxNx3)
'flow' Ground Truth Flow Vector (BxNx3)
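
For example, a 'registration' data_dict could be assembled as in the sketch below. The keys follow the table above; the use of numpy arrays (and the float32 dtype) is an assumption, not something this README specifies.

import numpy as np
from learning3d.data_utils import UserData

B, N = 16, 1024
data_dict = {
    'template': np.random.rand(B, N, 3).astype(np.float32),             # template point clouds (BxNx3)
    'source': np.random.rand(B, N, 3).astype(np.float32),               # source point clouds (BxNx3)
    'transformation': np.tile(np.eye(4, dtype=np.float32), (B, 1, 1)),  # ground-truth transformations (Bx4x4)
}
dataset = UserData('registration', data_dict)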

Use of Loss Functions:

from learning3d.losses import RMSEFeaturesLoss, FrobeniusNormLoss, ClassificationLoss, EMDLoss, ChamferDistanceLoss, CorrespondenceLoss
rmse = RMSEFeaturesLoss()
fn_loss = FrobeniusNormLoss()
classification_loss = ClassificationLoss()
emd = EMDLoss()
cd = ChamferDistanceLoss()
corr = CorrespondenceLoss()

Sr. No. Loss Type Use
1. RMSEFeaturesLoss Used to find root mean square value between two global feature vectors of point clouds
2. FrobeniusNormLoss Used to find the Frobenius norm between two transformation matrices
3. ClassificationLoss Used to calculate cross-entropy loss
4. EMDLoss Earth Mover's distance between two given point clouds
5. ChamferDistanceLoss Chamfer's distance between two given point clouds
6. CorrespondenceLoss Computes cross entropy loss using the predicted correspondence and ground truth correspondence for each source point
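
A minimal usage sketch, assuming the distance losses take two BxNx3 point clouds and FrobeniusNormLoss takes two Bx4x4 transformation matrices (the latter call pattern also appears in train_rpmnet.py, quoted in the issues below):

import torch
from learning3d.losses import ChamferDistanceLoss, FrobeniusNormLoss

template = torch.rand(4, 1024, 3)
source = torch.rand(4, 1024, 3)
print(ChamferDistanceLoss()(template, source))   # assumed: scalar Chamfer distance

est_T = torch.eye(4).repeat(4, 1, 1)             # estimated transformations (Bx4x4)
gt_T = torch.eye(4).repeat(4, 1, 1)              # ground-truth transformations (Bx4x4)
print(FrobeniusNormLoss()(est_T, gt_T))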

To run the example codes:

  1. Copy the file from the "examples" folder outside of the "learning3d" directory.
  2. Now, run the file (e.g. python test_pointnet.py).
  • Your Directory/Location
    • learning3d
    • test_pointnet.py

References:

  1. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
  2. Dynamic Graph CNN for Learning on Point Clouds
  3. PPFNet: Global Context Aware Local Features for Robust 3D Point Matching
  4. PointConv: Deep Convolutional Networks on 3D Point Clouds
  5. PointNetLK: Robust & Efficient Point Cloud Registration using PointNet
  6. PCRNet: Point Cloud Registration Network using PointNet Encoding
  7. Deep Closest Point: Learning Representations for Point Cloud Registration
  8. PRNet: Self-Supervised Learning for Partial-to-Partial Registration
  9. FlowNet3D: Learning Scene Flow in 3D Point Clouds
  10. PCN: Point Completion Network
  11. RPM-Net: Robust Point Matching using Learned Features
  12. 3D ShapeNets: A Deep Representation for Volumetric Shapes
  13. DeepGMR: Learning Latent Gaussian Mixture Models for Registration
  14. CMU: Correspondence Matrices are Underrated
  15. MaskNet: A Fully-Convolutional Network to Estimate Inlier Points
  16. MaskNet++: Inlier/outlier identification for two point clouds

learning3d's People

Contributors

fnuabhimanyu, kidpaul94, vinits5


learning3d's Issues

ModuleNotFoundError: No module named 'learning3d'

Hi, thank you so much for your repository. I am facing an issue when trying to use it, and I was hoping someone could shed some light on how to fix it.
When I try importing it as:

from learning3d.models import create_pointconv

I get a "ModuleNotFoundError: No module named 'learning3d'".

I have the learning3d folder in my directory and although I am trying to run it on my own code, I wasn't able to run it using the test_pointconv.py as suggested in the README file (I copied it out of the examples folder and put it in my directory).
I've checked the name of the folder, and the name I am using to import, and it seems to be alright. The folder shows up in my terminal when I list it.

I have tried cloning it in a new separate project and the same happened. I am not sure how to proceed, what is it I am missing?

Thank you for your time!

pointnet2 module error

Error raised in pointnet2 module in utils!
Either don't use pointnet2_utils or retry it's setup
Error in pointnet2_utils! Retry setup for pointnet2_utils.

how can i solve it?

train and test Point completion Network with custom dataset.

First, thank you very much for this incredible library. I have a question that I could not find the answer to in the issues.

I saw that you provide code in the "use your own data" part. Based on the table you mentioned, this dataloader can be used only for classification, registration, and flow estimation, though I want to use it for preparing the data for the Point Completion Network. Should I write a new dataloader? I don't know exactly what format the train and test data for PCN should have. Where can I find more information about it?

Possible error with Masknet?

Hi, thank you for your great work @vinits5! I'm looking forward to testing MaskNet on my own RGBD data.
I just encountered this following issue when running test_masknet.py in Google colab:

from learning3d.models import MaskNet

Error raised in pointnet2 module in utils!
Either don't use pointnet2_utils or retry it's setup.
INFO - 2021-05-17 01:41:24,679 - ppfnet - Using early fusion, feature dim = 96
INFO - 2021-05-17 01:41:24,680 - ppfnet - Feature extraction using features xyz, dxyz, ppf
Traceback (most recent call last):
  File "test_masknet.py", line 21, in <module>
    from learning3d.models import MaskNet
  File "/content/drive/MyDrive/learning3d/models/__init__.py", line 16, in <module>
    from .deepgmr import DeepGMR
  File "/content/drive/MyDrive/learning3d/models/deepgmr.py", line 162
    'r': template_features - source_features,
      ^
SyntaxError: invalid syntax 

I'm not exactly sure whether this syntax error was already there, or something happened because I tried to modify some part of test_masknet.py so that I can directly feed .ply file (instead of using .hdf5 format):

PPF-FoldNet

Thanks for this neat and necessary library @vinits5. I see that there is a plan to integrate PPF-FoldNet in the future as well. We are still unable to release the official sources due to some regulations, but I wanted to point you to @XuyangBai's implementation: https://github.com/XuyangBai/PPF-FoldNet which can be a fair starting point. Also feel free to contact us for related questions.

Error while running setup.py for pointnet2_cuda

Hi,
when I run the file setup.py for pointnet2_cuda, I am getting the following error:

error: wrong number of template arguments (5, should be 2)
return _and<_not<is_same<tuple<_Elements...>,
^
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
struct is_convertible
^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<const c10::optional&, const c10::VaryingShape&, const c10::VaryingShape&, const c10::optional&>&&; bool = true; _Elements = {const c10::optional&, const c10::VaryingShape&, const c10::VaryingShape&, const c10::optional&}]’ not a return-statement
}
^
error: command '/usr/bin/nvcc' failed with exit status 1

PCN Model: ClassificationData: Line 9:

Hello @vinits5, Thanks for the repo. I am facing a challenge in a segment of your code.

I am not able to understand the ClassificationData class. 'pcs' and 'labels' are passed to the find_attribute method and later the shape is checked in the check_data method. Since the arguments are strings, I am confused about their purpose.

As per the invocation, ClassificationData is getting a dataloader with a 2-dimensional shape (1024, 3):
self.pcs = self.find_attribute('pcs')

Further, I am getting an error as mentioned below:-

assert 1 < len(self.pcs.shape) < 4, "Error in dimension of point clouds! Given data dimension: {}".format(
AttributeError: 'NoneType' object has no attribute 'shape'

Problem loading the pretrained model in PCN

Hi there. Thanks for this repository.
I think there are some problems in the model constructions; the model files and the pretrained model state dictionaries do not seem to match.
I see problems while loading a pretrained model, such as in the Point Completion Network:

from learning3d.models import PCN
import torch
pcn = PCN()
pcn.load_state_dict(torch.load('learning3d/pretrained/exp_pcn/models/best_model.t7', map_location='cpu'))

and this is the error I get:

RuntimeError: Error(s) in loading state_dict for PCN:
        Missing key(s) in state_dict: "conv5.weight", "conv5.bias", "conv6.weight", "conv6.bias", "conv7.weight", "conv7.bias". 

Do you know what the problem is here?

ModuleNotFoundError when running examples

Sorry for the noob question, but how do you run the training/test examples located inside learning3d/examples?
I've installed all the requirements and I'm not using virtualenv, yet this is the error I get:

Traceback (most recent call last):
  File "train_pointnet.py", line 20, in <module>
    from learning3d.models import PointNet
ModuleNotFoundError: No module named 'learning3d'

I know I can get rid of that error with the following (and by editing checkpoint paths, I suppose):

# from learning3d.models import PointNet
# from learning3d.models import Classifier
# from learning3d.data_utils import ClassificationData, ModelNet40Data
from models import PointNet
from models import Classifier
from data_utils import ClassificationData, ModelNet40Data

But I was curious how you run these examples directly without editing any script (since I can't see any setup.py at the root folder to make this folder a Python module)?

prepare data to train

I want to train a model to register and match two point clouds to classify objects. How can I use this repo to train on my data?

'RegistrationData' object has no attribute 'use_rri'

Hi, when I run test_pnlk.py, it showed:

Traceback (most recent call last):
File "test_pnlk.py", line 121, in
main()
File "test_pnlk.py", line 118, in main
test(args, model, test_loader)
File "test_pnlk.py", line 61, in test
test_loss, test_accuracy = test_one_epoch(args.device, model, test_loader)
File "test_pnlk.py", line 42, in test_one_epoch
for i, data in enumerate(tqdm(test_loader)):
File ".../miniconda3/envs/pytorch-point/lib/python3.7/site-packages/tqdm/std.py", line 1195, in _iter_
for obj in iterable:
File ".../miniconda3/envs/pytorch-point/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in _next_
data = self._next_data()
File ".../miniconda3/envs/pytorch-point/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
return self._process_data(data)
File ".../miniconda3/envs/pytorch-point/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
data.reraise()
File ".../miniconda3/envs/pytorch-point/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File ".../miniconda3/envs/pytorch-point/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
data = fetcher.fetch(index)
File ".../miniconda3/envs/pytorch-point/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File ".../miniconda3/envs/pytorch-point/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File ".../learning3d/data_utils/dataloaders.py", line 257, in _getitem_
if self.use_rri:
AttributeError: 'RegistrationData' object has no attribute 'use_rri'

Someone said that it may be because of the versions of PyTorch and torchvision.
My PyTorch: 1.7.1, torchvision: 0.8.2
Could you tell me your environment settings, or how I could solve it?

Thanks!

Is this library functional?

I am trying to use your library in a project, but I constantly run into problems. Is it functional?
Also, you don't have GitHub tags or releases, so I can't pick an older, working version of it.

cannot install in pypi

Could the creator add a setup.py and push it to PyPI? I cannot find this open-source package on PyPI; if this were done, it would be perfect.

Possible error in train_rpmnet.py

I just encountered this following issue when running examples/train_rpmnet.py.

Namespace(batch_size=10, dataset_path='ModelNet40', dataset_type='modelnet', device='cuda:0', emb_dims=1024, epochs=200, eval=False, exp_name='exp_rpmnet', fine_tune_pointnet='tune', num_points=1024, optimizer='Adam', pretrained='', resume='', seed=1234, start_epoch=0, symfn='max', transfer_ptnet_weights='./checkpoints/exp_classifier/models/best_ptnet_model.t7', workers=4)
  0%| | 0/984 [00:00<?, ?it/s]
C:\tools\Anaconda3\envs\compress\lib\site-packages\torch\nn\_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
  warnings.warn(warning.format(ret))
  0%| | 0/984 [00:19<?, ?it/s]
Traceback (most recent call last):
  File "C:/Project/PointCloudLearning/examples/train_rpmnet.py", line 228, in <module>
    main()
  File "C:/Project/PointCloudLearning/examples/train_rpmnet.py", line 225, in main
    train(args, model, train_loader, test_loader, boardio, textio, checkpoint)
  File "C:/Project/PointCloudLearning/examples/train_rpmnet.py", line 113, in train
    train_loss = train_one_epoch(args.device, model, train_loader, optimizer)
  File "C:/Project/PointCloudLearning/examples/train_rpmnet.py", line 85, in train_one_epoch
    loss_val = FrobeniusNormLoss()(output['est_T'], igt) + RMSEFeaturesLoss()(output['r'])
KeyError: 'r'

Process finished with exit code 1

pointnet2_cuda not in the repository?

Hi @vinits5. First of all, thanks a lot for sharing this library, it looks really interesting.

I am trying to compile the library and run the test_dcp.py script. However, the code complains and does not work because "pointnet2_cuda" cannot be imported. Is the file missing from the repository or is there something else?

Cheers!

pcd to tensor

Hi there. Do you have any idea how to convert point clouds to a 3D tensor?

Using our own dataset

Hi !

First thank you very much for this incredible library !

I would like to use your library to do point cloud registration with two point clouds, but I am struggling to see how to do it. Is there any function that I missed? Thank you in advance for your answer.

No module named 'emd'

Hi,

When I use EMD loss in losses.py, it showed:

File "/XXX/learning3d/losses/emd.py", line 6, in emd
from emd import EMDLoss
ModuleNotFoundError: No module named 'emd'

Segmentation not implemented

Hello,

I wanted to try out the PointNet for segmentation, but I couldn't find any implementations of it in the examples. It would be great if there could be some additional examples or code snippets showcasing how to use PointNet for segmentation tasks.

Issue in running PointConv and PPFNet for classification

I am getting the following error when running PointConv and PPFNet for classification. Thanks in advance for any suggestions.

python train_PointConv_Narges.py --nclasses 9
all_data.shape (9840, 2048, 3) <class 'numpy.ndarray'>
all_label.shape (9840, 1) <class 'numpy.ndarray'>
all_data.shape (9840, 2048, 3) <class 'numpy.ndarray'>
all_label.shape (9840, 1) <class 'numpy.ndarray'>
Error raised in pointnet2 module in utils!
Either don't use pointnet2_utils or retry it's setup.
Error in pointnet2_utils! Retry setup for pointnet2_utils.
cp: cannot stat 'main.py': No such file or directory
cp: cannot stat 'model.py': No such file or directory
Namespace(batch_size=32, dataset_path='/media/emre/Data/Downloads/learning3d/../../ModelNet40/ModelNet40', dataset_type='modelnet', device='cuda:0', emb_dims=512, epochs=200, eval=False, exp_name='exp_classifier', nclasses=9, num_points=500, optimizer='Adam', pointnet='tune', pretrained='', resume='', seed=1234, start_epoch=0, symfn='max', workers=4)
(762, 500, 6)
(189, 500, 6)
0%| | 0/23 [00:01<?, ?it/s]
Traceback (most recent call last):
File "train_PointConv_Narges.py", line 253, in
main()
File "train_PointConv_Narges.py", line 250, in main
train(args, model, train_loader, test_loader, boardio, textio, checkpoint)
File "train_PointConv_Narges.py", line 129, in train
train_loss, train_accuracy = train_one_epoch(args.device, model, train_loader, optimizer)
File "train_PointConv_Narges.py", line 93, in train_one_epoch
output = model(points)
File "/home/emre/anaconda3/envs/learning3d3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/media/emre/Data/Downloads/learning3d/learning3d/models/classifier.py", line 23, in forward
output = self.pooling(self.feature_model(input_data))
File "/home/emre/anaconda3/envs/learning3d3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/media/emre/Data/Downloads/learning3d/learning3d/models/pooling.py", line 16, in forward
return torch.max(input, 2)[0].contiguous()
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2)

Transfer learning for Masknet?

Hello again!

I was trying to use MaskNet to do inlier estimation for my own object. But the model didn't show good performance in the estimation, even though I normalized both point clouds and they were already manually registered. This might have happened because my source is missing almost 60% of the geometric features that my target has (due to sensor noise and occlusion), or because the point density differs between the point clouds. But I also presume this can happen because my own object is different from the objects used for training MaskNet. Could you provide me some information on doing transfer learning for MaskNet (if possible)?

IndentationError

File "/content/learning3d/data_utils/dataloaders.py", line 259
source = np.concatenate([source, self.get_rri(source - source.mean(axis=0), self.nearest_neighbors)], axis=1)
^
IndentationError: unindent does not match any outer indentation level
