
3dnetworkspytorch's Introduction

3DNetworksPytorch

- Looking for new papers to implement in PyTorch! Go comment in the dedicated issue with papers you would like to see implemented using PyTorch!

This repository mostly contains implementations of papers using the PyTorch framework. PLEASE cite the corresponding papers before referencing this work. User discretion is advised concerning the accuracy and readiness of my implementations; please create issues when you encounter problems and I will try my best to fix them.

This repository is meant as a way to learn different 3D deep learning architectures for point clouds by implementing them. I haven't tested them on the benchmark datasets used in the papers, only on some toy examples. If you spot any mistake, I am open to pull requests and any collaboration on the topic.

(I haven't cleaned the code completely, so it might seem a bit messy at first sight.) Most of the networks use the CUDA code in cppattempt. Please go in there and install the extension (python setup.py install) so that they can import it. The only requirements should be PyTorch 1.0+ and the corresponding cudatoolkit, everything configured correctly of course. See the PyTorch documentation for how to compile C++ extensions.
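
For reference, building such an extension typically uses a setup.py like the minimal sketch below. This is only an illustration: the module name and source file names are placeholders and may not match what is actually inside cppattempt.

# setup.py -- minimal sketch of building a PyTorch C++/CUDA extension.
# The extension name and source files are placeholders, not the actual
# contents of the cppattempt folder.
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

setup(
    name="pointops",
    ext_modules=[
        CUDAExtension(
            name="pointops",
            sources=["pointops.cpp", "pointops_cuda.cu"],
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)

Running python setup.py install in the extension folder then compiles the sources against the local CUDA toolkit and installs the module so the networks can import it.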

Table of Contents

  1. PointSift
  2. PointCNN
  3. PointNet++
  4. Cuda_Extension
  5. 3D-BoNet
  6. SGPN
  7. PCN
  8. 3D_completion_challenge
  9. FastFCN
  10. CSRNet

PointSift

An implementation of PointSift using PyTorch (https://arxiv.org/pdf/1807.00652.pdf) lies in the PointSift folder. The C_utils folder contains some algorithms implemented in CUDA and C++, taken from the original implementation of PointSIFT (https://github.com/MVIG-SJTU/pointSIFT) but wrapped so they can be used with PyTorch tensors directly.

PointCNN

An implementation of PointCNN using PyTorch (https://arxiv.org/pdf/1801.07791.pdf) lies in the PointCNN folder.

PointNet++

An implementation of PointNet++ using PyTorch (https://arxiv.org/pdf/1706.02413.pdf) lies in the PointNet++ folder. It uses the same GPU algorithms as PointSift, since PointSift builds on PointNet++ modules.

Cuda_Extension

There are two versions of the CUDA extensions for PointNet++ and PointSift. The first one is in C_utils and was implemented using the old C API for torch. As that API is now deprecated in newer versions of PyTorch, which recommend using the C++ extension API instead, I made an attempt in the cppattempt folder.
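
As a side note, the C++ extension API can also be used to compile such code just-in-time instead of installing it with setup.py. A minimal sketch, with placeholder source file names that do not necessarily correspond to the files in cppattempt:

# Just-in-time compilation of a C++/CUDA extension (alternative to setup.py).
# The name and source files below are placeholders for illustration only.
from torch.utils.cpp_extension import load

pointops = load(
    name="pointops",
    sources=["pointops.cpp", "pointops_cuda.cu"],
    verbose=True,
)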

3D-BoNet

Quick implementation of 3D-BoNet (https://arxiv.org/pdf/1906.01140.pdf) using PyTorch: https://gist.github.com/lelouedec/5a7ba5547df5cef71b50ab306199623f. It is all in one file and needs the compiled C++ PointNet extension. The code is not converging for the bounding-box regression.

SGPN

Implementation of SGPN (https://arxiv.org/pdf/1711.08588.pdf), based on the PointNet implementation.

PCN

Implementation of PCN (PCN: Point Completion Network) (https://arxiv.org/pdf/1808.00671.pdf, https://github.com/wentaoyuan/pcn) using PyTorch. For the Chamfer distance and the EMD loss, I used the implementations from https://github.com/chrdiller/pyTorchChamferDistance and https://github.com/daerduoCarey/PyTorchEMD respectively; see these repositories for how to use them. Copy emd.py and the compiled ".so" lib into the same directory as your model and it should be fine. Tested with the ShapeNet data from the PCN paper; download it from the Google Drive link provided in their repository. The dataloader will help load the point clouds from the ShapeNet directory. See the following screenshot for an example (left is the ground truth, middle the input, and right the output of the network): Example for PCN.
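
As a rough usage sketch (assuming both repositories above are installed as they describe), the two losses can be combined on point clouds of shape (batch, num_points, 3) roughly as follows; the chamfer_distance module name is an assumption based on that repository, while the emd import matches the one used in PCN/models.py:

# Hedged sketch of combining the two external losses mentioned above.
# Assumes chamfer_distance comes from chrdiller/pyTorchChamferDistance and
# emd.py / emd_cuda come from daerduoCarey/PyTorchEMD, built as they describe.
import torch
from chamfer_distance import ChamferDistance   # module name assumed from that repo
from emd import earth_mover_distance           # same import as in PCN/models.py

pred = torch.rand(4, 1024, 3, device="cuda")   # network output (coarse or fine)
gt   = torch.rand(4, 1024, 3, device="cuda")   # ground-truth point cloud

chamfer = ChamferDistance()
dist1, dist2 = chamfer(pred, gt)               # per-point squared distances, both directions
loss_cd = dist1.mean() + dist2.mean()

loss_emd = earth_mover_distance(pred, gt, transpose=False).mean()
loss = loss_cd + loss_emd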

3D_completion_challenge

A new 3D completion challenge is available here: https://github.com/lynetcha/completion3d. It includes PCN (seen above).

FastFCN

A two-file implementation of the FastFCN paper, based on the authors' own implementation. Check the paper and repository for more details: https://arxiv.org/pdf/1903.11816.pdf, https://github.com/wuhuikai/FastFCN.

CSRNet

Implementation of the model used in the paper https://arxiv.org/pdf/1802.10062.pdf. It is only one of the variations, but it is the one used and advertised by the authors. As in the paper, the output needs to be upsampled to be compared to the original image.
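
For illustration, this upsampling step can be done with bilinear interpolation; a minimal sketch with dummy tensors (in practice the density map comes from the CSRNet model, whose output in the paper is 1/8 of the input resolution):

# Upsampling a CSRNet-style density map back to the input resolution.
# Dummy tensors only; `density` stands in for the network output.
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 768, 1024)            # input image
density = torch.rand(1, 1, 96, 128)            # 1/8-resolution density map
density_full = F.interpolate(
    density, size=image.shape[2:], mode="bilinear", align_corners=False
)
print(density_full.shape)                      # torch.Size([1, 1, 768, 1024])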

To cite this repository (as asked):

@software{lelouedec_2020_3766070,
  author       = {lelouedec},
  title        = {lelouedec/3DNetworksPytorch: pre-alpha},
  month        = apr,
  year         = 2020,
  version      = {0.1},
}


3dnetworkspytorch's Issues

PointCNN input format

Thank you so much for providing your code!!
Can I ask what the format of the input point cloud for PointCNN is?
Is it just x, y, z? My dataset does not have any additional features.

Error while running main.py

While running main.py I encountered an error and couldn't find a solution to it.

Traceback (most recent call last):
File "main.py", line 2, in <module>
import models
File "/home/ubuntu/3DNetworksPytorch/PCN/models.py", line 8, in <module>
from emd import earth_mover_distance
File "/home/ubuntu/3DNetworksPytorch/PCN/emd.py", line 2, in <module>
import emd_cuda
ImportError: /home/ubuntu/3DNetworksPytorch/PCN/emd_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: __cudaPopCallConfiguration

code missing for load_data

Your pytorch version of SPGN is the only one I can find online. This is very much appreciated!

To run the SPGN_train, I don't know where to find the load_data code. It should parse the test folder and load all the files there.

import load_data

print("loading data...")
points,colors,annotations,centers,bb_list,mask_list,label_list = load_data.load_data("../test")

It might be a while, but is it possible to list your train/test data file structure so that people can run your code without too much trouble?

Thanks,

DGCNN Edge Conv

Hi, I have a question about your EdgeConv implementation.
EdgeConv constructs K local graphs, making the shape Batch_size * C * Pts * K.
According to your implementation, a 1 * 1 kernel is applied, which means the local kernel weights are all the same.
So how does it extract local features using only a 1 * 1 conv? (The neighbouring point features only interact at the max-pool layer.)
In image convolution, the weights within each kernel are different.
For example, with a 3 * 3 kernel conv, the current DGCNN has the same weight across the kernel.
Then doesn't it make more sense to use (1 * K), like the X-conv from PointCNN, to capture local features?
What is your reason for using a 1 * 1 convolution rather than a 1 * K convolution?
Thank you!
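
For readers following this question, the difference between the two options comes down to the kernel shape of a Conv2d applied to the (batch, channels, points, K) tensor of edge features; a minimal sketch, with shapes chosen purely for illustration:

# Contrast between a 1x1 and a (1, K) convolution on EdgeConv-style features.
import torch
import torch.nn as nn

B, C_in, C_out, N, K = 2, 64, 128, 1024, 20
edge_feats = torch.rand(B, C_in, N, K)

# 1x1 conv: the same weights are applied to every neighbour independently;
# neighbours only interact later, e.g. through a max over the K dimension.
conv_1x1 = nn.Conv2d(C_in, C_out, kernel_size=1)
out_1x1 = conv_1x1(edge_feats).max(dim=-1).values   # (B, C_out, N)

# (1, K) conv: one kernel spans all K neighbours, so each neighbour position
# gets its own weights (closer to the X-conv idea from PointCNN).
conv_1xK = nn.Conv2d(C_in, C_out, kernel_size=(1, K))
out_1xK = conv_1xK(edge_feats).squeeze(-1)          # (B, C_out, N)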

c++ extension related

Thank you for a great job!
I would like to ask: to write an algorithm as a C++ extension and achieve a speed-up, what knowledge should be learned, and is it difficult for a C++ beginner?
Also, do C++ extensions only increase speed, or do they also affect the memory footprint?
...
Looking forward to your reply!

PCN

Which versions did you use for implementing PCN (Python, PyTorch, torchvision, cudatoolkit)?

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. in PointCNN

RuntimeError                              Traceback (most recent call last)
<ipython-input-8-a4f158041f7d> in <module>()
    229     model = PointCnnLayer(xyz,["features"],[ xconv_params,xdconv_params,fc_params ]).cuda()
    230     #print(model)
--> 231     out = model(xyz)
    232     print(out.shape)

8 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

<ipython-input-8-a4f158041f7d> in forward(self, x)
    108         #pts_idx = pts_idx[:,::self.settings[0][0].get('D'),:]
    109         pts_regional = torch.stack([x[n][idx,:] for n, idx in enumerate(torch.unbind(pts_idx, dim = 0))], dim = 0)
--> 110         out = self.xvonc1(rep_pts,pts_regional,None) ## FTS
    111         layer_pts.append(rep_pts)
    112         outs.append(out)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

<ipython-input-6-9e8ae6e637fc> in forward(self, rep_pt, pts, fts)
    105         X_shape = (N, P, self.K, self.K)
    106         X = pts_local.permute(0,3,1,2)
--> 107         X = self.conv1(X)
    108         X = X.permute(0,2,3,1)
    109         X = self.x_dense1(X)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in forward(self, input)
     98     def forward(self, input):
     99         for module in self:
--> 100             input = module(input)
    101         return input
    102 

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    343 
    344     def forward(self, input):
--> 345         return self.conv2d_forward(input, self.weight)
    346 
    347 class Conv3d(_ConvNd):

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight)
    340                             _pair(0), self.dilation, self.groups)
    341         return F.conv2d(input, weight, self.bias, self.stride,
--> 342                         self.padding, self.dilation, self.groups)
    343 
    344     def forward(self, input):

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.

I am running the code "3DNetworksPytorch/PointCNN/PointCNN.py" in Google Colab with the latest PyTorch and CUDA versions. @lelouedec, can you please look into it?

SGPN implementation

Hey,

Thank you for your great effort! Could you provide an evaluation result for your implementation of SGPN?

Best,

Error while running models.py in PCN

While running the models.py module with the same code, we came across the following error.

ubuntu@new:~/3DNetworksPytorch/PCN$ python models.py
THCudaCheck FAIL file=/home/ubuntu/3DNetworksPytorch/PCN/cuda/emd_kernel.cu line=192 error=98 : invalid device function
Traceback (most recent call last):
File "models.py", line 146, in
net.create_loss(coarse,fine,xyz,1.0)
File "models.py", line 124, in create_loss
loss_coarse = earth_mover_distance(coarse, gt_ds, transpose=False)
File "/home/ubuntu/3DNetworksPytorch/PCN/emd.py", line 44, in earth_mover_distance
cost = EarthMoverDistanceFunction.apply(xyz1, xyz2)
File "/home/ubuntu/3DNetworksPytorch/PCN/emd.py", line 11, in forward
match = emd_cuda.approxmatch_forward(xyz1, xyz2)
RuntimeError: cuda runtime error (98) : invalid device function at /home/ubuntu/3DNetworksPytorch/PCN/cuda/emd_kernel.cu:192
Segmentation fault (core dumped)

Implementation help

Can you help by providing a step-by-step execution guide for the PyTorch version of PCN? As a beginner I find it quite confusing. It would be of great help if you could provide one.

Regarding data

When I checked the repo, I understood that you have put up the code for testing, i.e., dataloader.py loads test data and not training data? What changes need to be made so that I can perform training? There is no train folder in shapenet (I saw train.lmdb and train.list).

point.select_cube: cuda device problem

Thank you for your great work.

I installed the 'point' module successfully, and everything works great.
But my computer has multiple GPUs, and when I load the model and data on a GPU other than GPU 0, something allocates memory on GPU 0 after the code passes the point.select_cube() function.
Because of this, if I run 4 scripts containing the point.select_cube() function on 4 GPUs, GPU 0 allocates more memory than any other GPU.

Could you help to solve this problem?

Thank you.

New implementations !

If you found this repository and/or starred it, you might be interested in this thread!

I am looking for new papers to implement and try for 3D vision. If they are not implemented in PyTorch, or only with an implementation you find confusing, please comment with a link to the paper and/or repository!

I will try to implement the interesting ones.
