krrish94 / chamferdist

PyTorch package to compute the Chamfer distance between point sets (pointclouds).

License: Other

Python 27.28% Cuda 59.38% C++ 12.79% C 0.55%

chamferdist's Introduction

chamferdist: PyTorch Chamfer distance

NOTE: This implementation was stolen from the pytorch3d repo, and all I did was simply repackage it.


A simple PyTorch module to compute the Chamfer distance between two pointclouds.

Installation

You can install the package using pip.

pip install chamferdist

Building from source

In your favourite Python/conda virtual environment, execute the following command.

NOTE: This assumes you have PyTorch installed already (preferably, >= 1.5.0; untested for earlier releases).

python setup.py install

Running (example)

That's it! You're now ready to go. Here's a quick guide to using the package. Fire up a Python interpreter and import the package.

>>> import torch
>>> from chamferdist import ChamferDistance

Create two random pointclouds. Each pointcloud is a 3D tensor with dimensions batchsize x number of points x number of dimensions.

>>> source_cloud = torch.randn(1, 100, 3).cuda()
>>> target_cloud = torch.randn(1, 50, 3).cuda()

Initialize a ChamferDistance object.

>>> chamferDist = ChamferDistance()

Now, compute the Chamfer distance.

>>> dist_forward = chamferDist(source_cloud, target_cloud)
>>> print(dist_forward.detach().cpu().item())

Here, dist_forward is the Chamfer distance between source_cloud and target_cloud. Note that the Chamfer distance is not bidirectional (and, in stricter parlance, it is not a distance metric).

The Chamfer distance in the backward direction, i.e., from target_cloud to source_cloud, can be computed in two ways. The naive way is to simply flip the order of the arguments, i.e.,

>>> dist_backward = chamferDist(target_cloud, source_cloud)

Another way is to use the reverse flag provided by the ChamferDistance module, i.e.,

>>> dist_backward = chamferDist(source_cloud, target_cloud, reverse=True)
>>> print(dist_backward.detach().cpu().item())

Typically, a symmetric version of the Chamfer distance is obtained by summing the "forward" and the "backward" Chamfer distances. This is supported by the bidirectional flag.

>>> dist_bidirectional = chamferDist(source_cloud, target_cloud, bidirectional=True)
>>> print(dist_bidirectional.detach().cpu().item())

Look at the example script for more details: example.py
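If you need the distance as a training loss, here is a minimal sketch that optimizes a source cloud toward a target with the bidirectional distance. The toy setup (directly optimizing the points, Adam, CPU tensors) is illustrative only and assumes the CPU kernels are built; move both clouds to the GPU, as in the snippets above, if the CUDA build is available.

import torch
from chamferdist import ChamferDistance

chamferDist = ChamferDistance()

# Toy setup: optimize the source points directly so that they match the target cloud.
source_cloud = torch.randn(1, 100, 3, requires_grad=True)
target_cloud = torch.randn(1, 50, 3)
optimizer = torch.optim.Adam([source_cloud], lr=1e-2)

for step in range(100):
    optimizer.zero_grad()
    loss = chamferDist(source_cloud, target_cloud, bidirectional=True)
    loss.backward()
    optimizer.step()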

Citing (the original implementation, PyTorch3D)

If you find this work useful, you might want to cite the original implementation from which this codebase was borrowed (stolen!) - PyTorch3D.

@article{ravi2020pytorch3d,
    author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon
                  and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},
    title = {Accelerating 3D Deep Learning with PyTorch3D},
    journal = {arXiv:2007.08501},
    year = {2020},
}

chamferdist's People

Contributors

charlieppark, haritha-j, krrish94, saryazdi


chamferdist's Issues

How to set K more than 1

Hi, thanks for sharing the code.

I was using chamferdist as my model loss, and it works very well.

Now I see that the 1.0.0 version of chamferdist has KNN points.

How can I switch chamferdist to KNN points? Do I just set K to whatever number I want?

Or do I have to call the KNN_points function in the chamfer.py file?
I ask because I didn't see this in the Example.py file.


Oh, one more question.

When I use chamferdist version 0.3.0,

I just call the chamfer function like dist1, dist2, idx1, idx2 = chamfer(source_pts, target_pts)
and then (torch.mean(dist1)) + (torch.mean(dist2)) to get the Chamfer loss,
which gives me the Chamfer distance from source_pts to target_pts and from target_pts to source_pts at the same time, right? (I'm not entirely sure about this.)

And now, in version 1.0.0, I have to change the code to
dist1, idx1 = chamfer(source_pts, target_pts) and dist2, idx2 = chamfer(target_pts, source_pts),
because chamferdist only computes one direction in version 1.0.0, right?

Thanks!
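A possible answer, sketched under the assumption that the knn_points helper repackaged from PyTorch3D is importable from chamferdist.chamfer and keeps its original keyword arguments (K, in particular):

import torch
from chamferdist.chamfer import knn_points   # assumption: the repackaged helper is exposed here

source_pts = torch.randn(1, 100, 3)
target_pts = torch.randn(1, 50, 3)

# K controls how many nearest neighbours are returned for each source point.
out = knn_points(source_pts, target_pts, K=4)
# In PyTorch3D the result is a namedtuple with .dists and .idx of shape (1, 100, 4);
# whether the repackaged helper keeps that structure is an assumption worth verifying.

As for the second question: in 1.0.0 each call indeed computes a single direction, so the two directions come either from two calls (as written in the issue) or from a single call with bidirectional=True, as in the README.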

ImportError: cannot import name '_C'

Hi, I just updated my chamferdist package from 0.3.0 to 1.0.0.
I used PyCharm to install the chamferdist package,
and my environment is:
torch => 1.4.0
torchvision => 0.5.0

When I run the Example.py file,
I get an error: ImportError: cannot import name '_C',
which points to line 12 of the chamfer.py file: "from chamferdist import _C"

How can I solve it?

P.S. I have tried reinstalling the package, but it didn't help.

Undefined variable in error message

Hi!

In lines 36 and 40 of chamfer.py, "pts" is an undefined variable. It's a small detail, since if that error path is reached the code would raise an exception anyway, but for added clarity it should be source_cloud and target_cloud.

problem when grad backward

Thanks for your work. I have used your code in PyTorch to evaluate two point sets, but a problem occurs when I call backward() on the distance:
TypeError: backward() takes 3 positional arguments but 5 were given.
I really don't know what happened.

Distance Not Normalized

a = torch.zeros(21).view(1,-1,3)
b = a + 1
dist = chamferDist(a, b)

The expected distance between a and b should be sqrt(3); however, chamferDist(a, b) returns 21. Is it because the distance is not divided by the number of points?
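A short arithmetic check of the reported value, assuming the module sums squared nearest-neighbour distances over points (which the returned 21 suggests):

import torch

a = torch.zeros(21).view(1, -1, 3)        # 7 points in 3D
b = a + 1

# Squared Euclidean distance from each point of a to its nearest neighbour in b.
sq_dist = ((a - b) ** 2).sum(dim=-1)      # shape (1, 7), every entry equals 3.0
print(sq_dist.sum().item())               # 21.0 -> matches the reported value (sum over points)
print(sq_dist.mean().item())              # 3.0  -> mean squared distance per point
print(sq_dist.mean().sqrt().item())       # 1.7320... = sqrt(3), the expected Euclidean distance

So 21 is consistent with a sum over 7 points of squared distances, not a per-point Euclidean distance.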

Ability to return non-reduced Chamfer loss

Hey, thanks for the repo, I've found it quite useful. One feature I've needed is the ability to return the Chamfer loss for each point-cloud pair, rather than a version reduced through 'mean' or 'sum'. This was already present in the PyTorch3D version.

I was hoping you could add this back as I think it would be rather useful, and it only requires loosening the validation to add back the functionality. I'll be happy to just submit a PR from my fork if you think this would be a useful addition.
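Until such a flag exists, a possible workaround is to compute the per-pair (non-reduced) loss directly with torch.cdist; the sketch below is plain PyTorch, not the package's API:

import torch

def chamfer_per_pair(src, tgt):
    # Pairwise squared distances, shape (B, N, M); assumes the clouds fit in memory.
    d = torch.cdist(src, tgt) ** 2
    forward = d.min(dim=2).values.sum(dim=1)    # (B,) source -> target
    backward = d.min(dim=1).values.sum(dim=1)   # (B,) target -> source
    return forward + backward                   # one value per point-cloud pair

losses = chamfer_per_pair(torch.randn(4, 100, 3), torch.randn(4, 50, 3))
print(losses.shape)   # torch.Size([4])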

Add point reduction parameter

I was using this chamferdist package and PyTorch3D to compute the Chamfer distance of point clouds.

I found that there is a difference in the results compared to those of PyTorch3D. #14

It was because this package only supports control of batch reduction method.

In detail, PyTorch3D uses mean for both batch reduction and point reduction by default, as in the signature below:

pytorch3d.loss.chamfer_distance(x, y, ..., batch_reduction: Optional[str] = 'mean', point_reduction: str = 'mean', ...)

However, this package uses mean for batch reduction but sum for point reduction, and doesn't support controlling the point reduction method.

Thus I added a point reduction parameter.

Now the results are the same as those of PyTorch3D if we set both reductions to "mean" and bidirectional=True.

So I opened a PR; I didn't realize I should perhaps have opened an issue first.

@krrish94 Can you check if #19 and its source are OK?

BTW, thank you for making this package. It helped me a lot :)
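A comparison sketch of what the PR aims for. The point_reduction keyword on the chamferdist side is the proposed addition from #19, not a documented argument, and the expected match is an assumption based on the description above:

import torch
from chamferdist import ChamferDistance
from pytorch3d.loss import chamfer_distance

x = torch.randn(1, 100, 3)
y = torch.randn(1, 50, 3)

# PyTorch3D: mean over the batch and mean over points by default.
pt3d_loss, _ = chamfer_distance(x, y)

# chamferdist with the proposed kwarg from #19 (hypothetical until merged).
cd_loss = ChamferDistance()(x, y, bidirectional=True, point_reduction="mean")

print(pt3d_loss.item(), cd_loss.item())   # expected to agree if the PR behaves as described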

Doesn't work in 2D

I used the sample code from the example, but with 2 dimensions instead of 3.

The functions compute without throwing an error, but the returned indices are wrong.
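A possible workaround while 2D inputs are unsupported: lift the clouds to 3D by padding a zero z-coordinate, which leaves all pairwise distances (and hence the nearest-neighbour indices) unchanged. A sketch:

import torch
from chamferdist import ChamferDistance

src2d = torch.randn(1, 100, 2)
tgt2d = torch.randn(1, 50, 2)

# Pad a zero third coordinate; distances in the padded 3D space equal the 2D distances.
src3d = torch.nn.functional.pad(src2d, (0, 1))   # shape (1, 100, 3), z = 0
tgt3d = torch.nn.functional.pad(tgt2d, (0, 1))

dist = ChamferDistance()(src3d, tgt3d)
print(dist.item())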

Error: Not compiled with cuda

Hi,

I installed chamferdist via pip. But when I run my training loop, where both source and target are on CUDA (via .cuda()), I get the following error:

Traceback (most recent call last):
  File "train.py", line 114, in <module>
    loss_net = chamferDist(pred, gt, bidirectional=True)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/chamferdist/chamfer.py", line 82, in forward
    K=1,
  File "/opt/conda/lib/python3.7/site-packages/chamferdist/chamfer.py", line 267, in knn_points
    p1, p2, lengths1, lengths2, K, version, return_sorted
  File "/opt/conda/lib/python3.7/site-packages/chamferdist/chamfer.py", line 162, in forward
    idx, dists = _C.knn_points_idx(p1, p2, lengths1, lengths2, K, version)
RuntimeError: Not compiled with GPU support.
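A small diagnostic sketch. It only checks whether the local torch install sees CUDA and then, as an assumption based on the knn_cpu.cpp file that the build logs show being shipped with the package, keeps the clouds on the CPU so training can continue while the GPU build is sorted out:

import torch
from chamferdist import ChamferDistance

print(torch.cuda.is_available(), torch.version.cuda)   # sanity-check the local CUDA setup

pred = torch.randn(1, 100, 3)   # stand-ins for the tensors in the training loop
gt = torch.randn(1, 100, 3)

# Keeping the clouds on the CPU avoids the CUDA branch of the extension entirely.
chamferDist = ChamferDistance()
loss = chamferDist(pred, gt, bidirectional=True)
print(loss.item())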

nvcc fatal : Unsupported gpu architecture 'compute_86'

Unable to install from pip or compile from source with CUDA 11.0 (using Ampere GPUs).
Any tips to fix this?

Here's the complete log (it's large):

Building wheels for collected packages: chamferdist
  Building wheel for chamferdist (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/shrek/miniconda3/envs/cleargrasp/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3kcapdc7
       cwd: /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/
  Complete output (105 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.8
  creating build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/__init__.py -> build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/chamfer.py -> build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/version.py -> build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/knn.cu -> build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/mink.cuh -> build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/index_utils.cuh -> build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/dispatch.cuh -> build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/cutils.h -> build/lib.linux-x86_64-3.8/chamferdist
  copying chamferdist/knn.h -> build/lib.linux-x86_64-3.8/chamferdist
  running build_ext
  building 'chamferdist._C' extension
  creating /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8
  creating /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist
  Emitting ninja build file /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/3] /usr/local/cuda-11.0/bin/nvcc -DWITH_CUDA -Ichamferdist -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/TH -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/shrek/miniconda3/envs/cleargrasp/include/python3.8 -c -c /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/knn.cu -o /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
  FAILED: /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn.o
  /usr/local/cuda-11.0/bin/nvcc -DWITH_CUDA -Ichamferdist -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/TH -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/shrek/miniconda3/envs/cleargrasp/include/python3.8 -c -c /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/knn.cu -o /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
  nvcc fatal   : Unsupported gpu architecture 'compute_86'
  [2/3] c++ -MMD -MF /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn_cpu.o.d -pthread -B /home/shrek/miniconda3/envs/cleargrasp/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Ichamferdist -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/TH -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/shrek/miniconda3/envs/cleargrasp/include/python3.8 -c -c /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/knn_cpu.cpp -o /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn_cpu.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
                   from /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/knn_cpu.cpp:3:
  /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  [3/3] c++ -MMD -MF /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/ext.o.d -pthread -B /home/shrek/miniconda3/envs/cleargrasp/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Ichamferdist -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/TH -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/shrek/miniconda3/envs/cleargrasp/include/python3.8 -c -c /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/ext.cpp -o /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/ext.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
                   from /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/ext.cpp:1:
  /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1533, in _run_ninja_build
      subprocess.run(
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/subprocess.py", line 512, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
  
  The above exception was the direct cause of the following exception:
  
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/setup.py", line 76, in <module>
      setup(
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
      return distutils.core.setup(**attrs)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/core.py", line 148, in setup
      dist.run_commands()
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/dist.py", line 966, in run_commands
      self.run_command(cmd)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 299, in run
      self.run_command('build')
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build.py", line 135, in run
      self.run_command(cmd_name)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
      _build_ext.run(self)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build_ext.py", line 340, in run
      self.build_extensions()
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 670, in build_extensions
      build_ext.build_extensions(self)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
      self._build_extensions_serial()
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
      self.build_extension(ext)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
      _build_ext.build_extension(self, ext)
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
      objects = self.compiler.compile(sources,
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 491, in unix_wrap_ninja_compile
      _write_ninja_file_and_compile_objects(
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1250, in _write_ninja_file_and_compile_objects
      _run_ninja_build(
    File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build
      raise RuntimeError(message) from e
  RuntimeError: Error compiling objects for extension
  ----------------------------------------
  ERROR: Failed building wheel for chamferdist
  Running setup.py clean for chamferdist
Failed to build chamferdist
Installing collected packages: Pillow, trimesh, pyglet, chamferdist, seg-lapa
  Attempting uninstall: Pillow
    Found existing installation: Pillow 8.0.1
    Uninstalling Pillow-8.0.1:
      Successfully uninstalled Pillow-8.0.1
    Running setup.py install for chamferdist ... error
    ERROR: Command errored out with exit status 1:
     command: /home/shrek/miniconda3/envs/cleargrasp/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-cavgkf1w/install-record.txt --single-version-externally-managed --compile --install-headers /home/shrek/miniconda3/envs/cleargrasp/include/python3.8/chamferdist
         cwd: /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/
    Complete output (107 lines):
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.8
    creating build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/__init__.py -> build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/chamfer.py -> build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/version.py -> build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/knn.cu -> build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/mink.cuh -> build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/index_utils.cuh -> build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/dispatch.cuh -> build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/cutils.h -> build/lib.linux-x86_64-3.8/chamferdist
    copying chamferdist/knn.h -> build/lib.linux-x86_64-3.8/chamferdist
    running build_ext
    building 'chamferdist._C' extension
    creating /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8
    creating /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist
    Emitting ninja build file /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/build.ninja...
    Compiling objects...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    [1/3] /usr/local/cuda-11.0/bin/nvcc -DWITH_CUDA -Ichamferdist -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/TH -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/shrek/miniconda3/envs/cleargrasp/include/python3.8 -c -c /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/knn.cu -o /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
    FAILED: /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn.o
    /usr/local/cuda-11.0/bin/nvcc -DWITH_CUDA -Ichamferdist -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/TH -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/shrek/miniconda3/envs/cleargrasp/include/python3.8 -c -c /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/knn.cu -o /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
    nvcc fatal   : Unsupported gpu architecture 'compute_86'
    [2/3] c++ -MMD -MF /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn_cpu.o.d -pthread -B /home/shrek/miniconda3/envs/cleargrasp/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Ichamferdist -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/TH -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/shrek/miniconda3/envs/cleargrasp/include/python3.8 -c -c /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/knn_cpu.cpp -o /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/knn_cpu.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    In file included from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149:0,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
                     from /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/knn_cpu.cpp:3:
    /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
     #pragma omp parallel for if ((end - begin) >= grain_size)
    
    [3/3] c++ -MMD -MF /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/ext.o.d -pthread -B /home/shrek/miniconda3/envs/cleargrasp/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Ichamferdist -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/TH -I/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/shrek/miniconda3/envs/cleargrasp/include/python3.8 -c -c /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/ext.cpp -o /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/build/temp.linux-x86_64-3.8/chamferdist/ext.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    In file included from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149:0,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                     from /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
                     from /tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/chamferdist/ext.cpp:1:
    /home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
     #pragma omp parallel for if ((end - begin) >= grain_size)
    
    ninja: build stopped: subcommand failed.
    Traceback (most recent call last):
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1533, in _run_ninja_build
        subprocess.run(
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/subprocess.py", line 512, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/setup.py", line 76, in <module>
        setup(
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
        return distutils.core.setup(**attrs)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/setuptools/command/install.py", line 61, in run
        return orig.install.run(self)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/install.py", line 545, in run
        self.run_command('build')
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build.py", line 135, in run
        self.run_command(cmd_name)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
        _build_ext.run(self)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build_ext.py", line 340, in run
        self.build_extensions()
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 670, in build_extensions
        build_ext.build_extensions(self)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
        self._build_extensions_serial()
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
        self.build_extension(ext)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
        _build_ext.build_extension(self, ext)
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
        objects = self.compiler.compile(sources,
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 491, in unix_wrap_ninja_compile
        _write_ninja_file_and_compile_objects(
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1250, in _write_ninja_file_and_compile_objects
        _run_ninja_build(
      File "/home/shrek/miniconda3/envs/cleargrasp/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build
        raise RuntimeError(message) from e
    RuntimeError: Error compiling objects for extension
    ----------------------------------------
ERROR: Command errored out with exit status 1: /home/shrek/miniconda3/envs/cleargrasp/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4hl0w9tz/chamferdist_0245ab78b384422cbd145cbe3630c19d/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-cavgkf1w/install-record.txt --single-version-externally-managed --compile --install-headers /home/shrek/miniconda3/envs/cleargrasp/include/python3.8/chamferdist Check the logs for full command output.
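CUDA 11.0's nvcc does not know sm_86, so one possible workaround (an assumption, not a tested fix) is to cap the architecture list that torch.utils.cpp_extension emits via the TORCH_CUDA_ARCH_LIST environment variable before building; sm_80 binaries run on sm_86 GPUs. A sketch that drives pip from Python so the variable is inherited by the build:

import os
import subprocess
import sys

# Limit code generation to an architecture CUDA 11.0 understands (8.0).
env = dict(os.environ, TORCH_CUDA_ARCH_LIST="8.0")
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--no-binary", "chamferdist", "chamferdist"],
    env=env,
    check=True,
)

The same variable can of course be exported in the shell before running pip or python setup.py install.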

manual installation fails

Hello,

I have a conda environment with torch 1.6 and cudatoolkit 10.2.89.
However, on my machine, nvcc --version shows V10.1.243.

When I have this environment activated, clone the repository, and run python setup.py install, I get the following error.

running install
running bdist_egg
running egg_info
creating chamferdist.egg-info
writing chamferdist.egg-info/PKG-INFO
writing dependency_links to chamferdist.egg-info/dependency_links.txt
writing requirements to chamferdist.egg-info/requires.txt
writing top-level names to chamferdist.egg-info/top_level.txt
writing manifest file 'chamferdist.egg-info/SOURCES.txt'
reading manifest file 'chamferdist.egg-info/SOURCES.txt'
writing manifest file 'chamferdist.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib.linux-x86_64-3.7
creating build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/chamfer.py -> build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/version.py -> build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/__init__.py -> build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/knn.cu -> build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/dispatch.cuh -> build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/index_utils.cuh -> build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/mink.cuh -> build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/cutils.h -> build/lib.linux-x86_64-3.7/chamferdist
copying chamferdist/knn.h -> build/lib.linux-x86_64-3.7/chamferdist
running build_ext
building 'chamferdist._C' extension
creating /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7
creating /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/chamferdist
Emitting ninja build file /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] /usr/bin/nvcc -DWITH_CUDA -Ichamferdist -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/TH -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/THC -I/home/christina/miniconda3/envs/py3-mink/include/python3.7m -c -c /media/christina/Data/coding/chamferdist/chamferdist/knn.cu -o /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/chamferdist/knn.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=sm_61 -std=c++14
FAILED: /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/chamferdist/knn.o 
/usr/bin/nvcc -DWITH_CUDA -Ichamferdist -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/TH -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/THC -I/home/christina/miniconda3/envs/py3-mink/include/python3.7m -c -c /media/christina/Data/coding/chamferdist/chamferdist/knn.cu -o /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/chamferdist/knn.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=sm_61 -std=c++14
/usr/include/c++/8/utility(307): error: pack expansion does not make use of any argument packs

/usr/include/c++/8/utility(329): error: pack expansion does not make use of any argument packs

/usr/include/c++/8/utility(329): error: expected a ">"
          detected during instantiation of type "std::make_integer_sequence<std::size_t, _Num>" 
(340): here

/usr/include/c++/8/utility(307): error: identifier "__integer_pack" is undefined
          detected during:
            instantiation of class "std::_Build_index_tuple<_Num> [with _Num=1UL]" 
/usr/include/c++/8/functional(389): here
            instantiation of class "std::_Bind<_Functor (_Bound_args...)> [with _Functor=lambda [](std::function<c10::IValue ()>)->void, _Bound_args=<std::function<c10::IValue ()>>]" 
/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h(361): here

/usr/include/c++/8/utility(307): error: expected a ">"
          detected during:
            instantiation of class "std::_Build_index_tuple<_Num> [with _Num=1UL]" 
/usr/include/c++/8/functional(389): here
            instantiation of class "std::_Bind<_Functor (_Bound_args...)> [with _Functor=lambda [](std::function<c10::IValue ()>)->void, _Bound_args=<std::function<c10::IValue ()>>]" 
/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h(361): here

/usr/include/c++/8/functional(482): error: no instance of function template "std::_Bind<_Functor (_Bound_args...)>::__call [with _Functor=lambda [](std::function<c10::IValue ()>)->void, _Bound_args=<std::function<c10::IValue ()>>]" matches the argument list
            argument types are: (std::tuple<>, <error-type>)
            object type is: std::_Bind<lambda [](std::function<c10::IValue ()>)->void (std::function<c10::IValue ()>)>
          detected during:
            instantiation of "_Result std::_Bind<_Functor (_Bound_args...)>::operator()(_Args &&...) [with _Functor=lambda [](std::function<c10::IValue ()>)->void, _Bound_args=<std::function<c10::IValue ()>>, _Args=<>, _Result=void]" 
/usr/include/c++/8/bits/std_function.h(298): here
            instantiation of "void std::_Function_handler<void (_ArgTypes...), _Functor>::_M_invoke(const std::_Any_data &, _ArgTypes &&...) [with _Functor=std::_Bind<lambda [](std::function<c10::IValue ()>)->void (std::function<c10::IValue ()>)>, _ArgTypes=<>]" 
/usr/include/c++/8/bits/std_function.h(675): here
            instantiation of "std::function<_Res (_ArgTypes...)>::function(_Functor) [with _Res=void, _ArgTypes=<>, _Functor=std::_Bind<lambda [](std::function<c10::IValue ()>)->void (std::function<c10::IValue ()>)>, <unnamed>=void, <unnamed>=void]" 
/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/ATen/core/ivalue_inl.h(353): here

6 errors detected in the compilation of "/tmp/tmpxft_0001519b_00000000-6_knn.cpp1.ii".
[2/3] c++ -MMD -MF /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/chamferdist/knn_cpu.o.d -pthread -B /opt/anaconda1anaconda2anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Ichamferdist -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/TH -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/THC -I/home/christina/miniconda3/envs/py3-mink/include/python3.7m -c -c /media/christina/Data/coding/chamferdist/chamferdist/knn_cpu.cpp -o /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/chamferdist/knn_cpu.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from /media/christina/Data/coding/chamferdist/chamferdist/knn_cpu.cpp:3:
/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
 #pragma omp parallel for if ((end - begin) >= grain_size)
 
[3/3] c++ -MMD -MF /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/chamferdist/ext.o.d -pthread -B /opt/anaconda1anaconda2anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Ichamferdist -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/TH -I/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/THC -I/home/christina/miniconda3/envs/py3-mink/include/python3.7m -c -c /media/christina/Data/coding/chamferdist/chamferdist/ext.cpp -o /media/christina/Data/coding/chamferdist/build/temp.linux-x86_64-3.7/chamferdist/ext.o -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
                 from /home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from /media/christina/Data/coding/chamferdist/chamferdist/ext.cpp:1:
/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
 #pragma omp parallel for if ((end - begin) >= grain_size)
 
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1515, in _run_ninja_build
    env=env)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "setup.py", line 88, in <module>
    cmdclass={"build_ext": BuildExtension},
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/setuptools/__init__.py", line 163, in setup
    return distutils.core.setup(**attrs)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run
    self.do_egg_install()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install
    self.run_command('bdist_egg')
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 175, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 161, in call_command
    self.run_command(cmdname)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/command/install_lib.py", line 107, in build
    self.run_command('build_ext')
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 87, in run
    _build_ext.run(self)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions
    build_ext.build_extensions(self)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
    depends=ext.depends)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 478, in unix_wrap_ninja_compile
    with_cuda=with_cuda)
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
    error_prefix='Error compiling objects for extension')
  File "/home/christina/miniconda3/envs/py3-mink/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build
    raise RuntimeError(message)
RuntimeError: Error compiling objects for extension

Do you have any hint of what the issue is?

Thanks
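A tiny diagnostic sketch: the build is picking up /usr/bin/nvcc (10.1) while the conda environment provides cudatoolkit 10.2, and the __integer_pack errors above are the classic signature of an nvcc / host-compiler header mismatch. Printing what torch and the extension builder see may help confirm which CUDA is actually being used (the interpretation is an assumption, not a diagnosis):

import torch
from torch.utils.cpp_extension import CUDA_HOME

print("torch built with CUDA:", torch.version.cuda)   # e.g. 10.2
print("CUDA_HOME used for extensions:", CUDA_HOME)    # directory whose nvcc will be invoked

If CUDA_HOME points at the system installation rather than the toolkit matching torch, setting CUDA_HOME to the intended toolkit before rebuilding is one thing to try.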

error when print(idx1)


RuntimeError Traceback (most recent call last)
in ()
----> 1 print(idx1)

4 frames
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in __repr__(self)
128 # characters to replace unicode characters with.
129 if sys.version_info > (3,):
--> 130 return torch._tensor_str._str(self)
131 else:
132 if hasattr(sys.stdout, 'encoding'):

/usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in _str(self)
309 tensor_str = _tensor_str(self.to_dense(), indent)
310 else:
--> 311 tensor_str = _tensor_str(self, indent)
312
313 if self.layout != torch.strided:

/usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in _tensor_str(self, indent)
207 if self.dtype is torch.float16 or self.dtype is torch.bfloat16:
208 self = self.float()
--> 209 formatter = _Formatter(get_summarized_data(self) if summarize else self)
210 return _tensor_str_with_formatter(self, indent, formatter, summarize)
211

/usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in __init__(self, tensor)
81 if not self.floating_dtype:
82 for value in tensor_view:
---> 83 value_str = '{}'.format(value)
84 self.max_width = max(self.max_width, len(value_str))
85

/usr/local/lib/python3.6/dist-packages/torch/tensor.py in __format__(self, format_spec)
375 def __format__(self, format_spec):
376 if self.dim() == 0:
--> 377 return self.item().__format__(format_spec)
378 return object.__format__(self, format_spec)
379

RuntimeError: CUDA error: an illegal memory access was encountered
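CUDA kernels run asynchronously, so an illegal memory access inside the chamfer/knn kernel often only surfaces later, here when idx1 is first read back for printing. A small sketch (an assumption about where the error originates, not a fix) to surface the failing call directly:

import os

# Must be set before any CUDA work happens in the process.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch
# ... run the chamfer / knn call that produces idx1 here ...
torch.cuda.synchronize()   # forces pending CUDA errors to be reported at this point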

Torch dependency

@krrish94 Your package depends on the torch library but it is not listed in setup.py. Can you add that dependency to your setup.py?
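The requested change would look roughly like the sketch below; the surrounding setup() arguments (extensions, cmdclass, and so on) are assumed to stay as they already are in the repository:

# setup.py (sketch)
from setuptools import setup

setup(
    name="chamferdist",
    install_requires=["torch>=1.5.0"],   # declare the runtime dependency on torch
    # ... the existing ext_modules / cmdclass / packages arguments remain unchanged ...
)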

Fail to install.

Hi~ I find this work will really solve my problem, but I failed when installing.

When I use pip to install, it fails as in the attached screenshot.

When I install from source, it fails with an nvcc fatal error:
nvcc fatal : Unsupported gpu architecture 'compute_75' error: command '/usr/local/cuda-9.0/bin/nvcc' failed with exit status 1
But I'm sure my CUDA works fine. Can you please tell me what may be wrong?

System information: Ubuntu 16.04, Python 3.6, PyTorch 1.3

CUDA version (11.8) mismatches

Hi, trying to install it.

But
"""
The detected CUDA version (11.8) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions
"""

I cannot change to a newer version of the CUDA toolkit.

Is there a way to compile the source with this CUDA version?

Thanks in advance to whoever answers!
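If the CUDA toolkit cannot be upgraded, one alternative (an assumption about what will satisfy the version check, not a guarantee) is to install a PyTorch build that was itself compiled against CUDA 11.8, using PyTorch's cu118 wheel index, so that the detected and expected versions match:

import subprocess
import sys

# Install a torch wheel built for CUDA 11.8 so the extension build check passes.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "torch",
     "--index-url", "https://download.pytorch.org/whl/cu118"],
    check=True,
)

The equivalent pip command can of course be run directly in the shell.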

Order of the points matters

Hi, Thanks a lot for sharing your code here.

I have one question regarding the forward pass of the Chamfer distance. In my experiments, I found that reordering the input point clouds results in different Chamfer distance values. Is this normal? Is there some approximation happening in the code?

From what I understand of the paper "A Point Set Generation Network for 3D Object Reconstruction from a Single Image",
the Chamfer distance is defined to be agnostic to the ordering of the input points.
Would you mind clarifying this point?

Thanks!
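A small sanity check, using the call signature from the README. Mathematically the bidirectional Chamfer distance is unchanged by permuting the points within each cloud, so any difference beyond floating-point noise in the reduction would be worth reporting with a concrete example:

import torch
from chamferdist import ChamferDistance

cd = ChamferDistance()
src = torch.randn(1, 100, 3)
tgt = torch.randn(1, 50, 3)

perm = torch.randperm(src.shape[1])
src_shuffled = src[:, perm, :]          # same points, different order

d1 = cd(src, tgt, bidirectional=True)
d2 = cd(src_shuffled, tgt, bidirectional=True)
print(d1.item(), d2.item())             # should agree up to floating-point error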

Typo in Readme

I think instead of this

from chamferdist import ChamferDist

it should be

from chamferdist import ChamferDistance
