alexandrosstergiou / adapool
77 stars · 2 watchers · 8 forks · 38.33 MB

[T-IP 2023] Code for exponential adaptive pooling for PyTorch

License: MIT License

Python 27.94% C++ 11.15% Cuda 60.86% Makefile 0.04%

adapool's Issues

PyTorch version requirement, and how to handle the `nan_to_num` function on lower versions

Thank you for this amazing project. I saw it from SoftPool.
After installing it and running `make test`, I got `AttributeError: module 'torch' has no attribute 'nan_to_num'`. After checking, I found that this function, used in idea.py, was introduced in PyTorch 1.8.0, so the torch version in the README may need to be updated. Alternatively, is there an easy way to stay compatible with lower versions?
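One possible workaround, sketched below, is a small compatibility shim: fall back to manual masking when `torch.nan_to_num` is unavailable. The function name `nan_to_num_compat` is hypothetical, not part of adaPool; the fallback mirrors the defaults of the native op (NaN → 0, ±inf → dtype max/min).

```python
import torch

def nan_to_num_compat(x, nan=0.0):
    """Hypothetical shim: use torch.nan_to_num when available (PyTorch >= 1.8.0),
    otherwise replace NaN/inf values manually with the same defaults."""
    if hasattr(torch, "nan_to_num"):
        return torch.nan_to_num(x, nan=nan)
    finfo = torch.finfo(x.dtype)
    x = x.clone()
    x[torch.isnan(x)] = nan                 # NaN -> 0.0 (default)
    x[x == float("inf")] = finfo.max        # +inf -> largest finite value
    x[x == float("-inf")] = finfo.min       # -inf -> smallest finite value
    return x

t = torch.tensor([float("nan"), float("inf"), float("-inf"), 1.0])
out = nan_to_num_compat(t)
```

Replacing the `torch.nan_to_num` calls in idea.py with such a shim should let the package run on older PyTorch versions.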

Installation issue on Google Colab

Hi,
Thanks for providing a CUDA-optimized implementation. While building the library I encountered an issue with `inf` in limits.cuh.

CUDA/limits.cuh(119): error: identifier "inf" is undefined

CUDA/limits.cuh(120): error: identifier "inf" is undefined

CUDA/limits.cuh(128): error: identifier "inf" is undefined

CUDA/limits.cuh(129): error: identifier "inf" is undefined

4 errors detected in the compilation of "CUDA/adapool_cuda_kernel.cu".
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
Makefile:2: recipe for target 'install' failed
make: *** [install] Error 1

The following notebook provides more details along with environment information:
https://colab.research.google.com/drive/1T6Nxe2qbjKxXzo2IimFMYBn52qbthlZB?usp=sharing

Error during backpropagation

File "/home/user/anaconda3/envs/torch_py38/lib/python3.8/site-packages/adaPool-0.1-py3.8-linux-x86_64.egg/adaPool/idea.py", line 206, in backward
adapool_cuda.backward_1d_em(*saved)
RuntimeError: input.is_contiguous()INTERNAL ASSERT FAILED at "CUDA/adapool_cuda.cpp":348, please report a bug to PyTorch. input must be a contiguous tensor
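This assertion typically fires when a non-contiguous tensor (for example, the result of `transpose()` or `permute()`) reaches the CUDA kernel. A hedged workaround, not specific to adaPool's API, is to call `.contiguous()` on the tensor before passing it to the pooling layer:

```python
import torch

x = torch.randn(2, 8, 16)
y = x.transpose(1, 2)       # transpose returns a non-contiguous view
assert not y.is_contiguous()

y = y.contiguous()          # dense copy with standard strides,
                            # satisfies input.is_contiguous() in the kernel
```

If the non-contiguous tensor is produced inside the autograd graph (as in the backward pass above), inserting `.contiguous()` on the layer's input in the model's `forward` is usually enough.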

Hello

Hello, could you provide a pure-Python version of the code? Thanks.

input must be a CUDA tensor error when using AdaPool3d

Great work!
However, when I'm using AdaPool3d, I encountered the error below:

Traceback (most recent call last):
  File "<masked>/main.py", line 339, in <module>
    main()
  File "<masked>/main.py", line 329, in main
    train_loss, train_acc = train(model, train_loader, epoch, criterion, optimizer)
  File "<masked>/main.py", line 113, in train
    outputs = model(inputs)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 183, in forward
    return self.module(*inputs[0], **module_kwargs[0])
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "<masked>/models/cnn3d_adapool.py", line 52, in forward
    x = self.pool(x)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/adaPool-0.1-py3.9-linux-x86_64.egg/adaPool/idea.py", line 1495, in forward
    return adapool3d(x, beta=self.beta, kernel_size=self.kernel_size, stride=self.stride, return_mask=self.return_mask, native=self.native)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/adaPool-0.1-py3.9-linux-x86_64.egg/adaPool/idea.py", line 771, in adapool3d
    x = beta*CUDA_ADAPOOL3d_EDSCW.apply(x, kernel_size, stride, return_mask) + (1. - beta)*CUDA_ADAPOOL3d_EM.apply(x, kernel_size, stride, return_mask)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/autograd/function.py", line 553, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/torch/cuda/amp/autocast_mode.py", line 123, in decorate_fwd
    return fwd(*args, **kwargs)
  File "<masked>/miniconda3/envs/python39/lib/python3.9/site-packages/adaPool-0.1-py3.9-linux-x86_64.egg/adaPool/idea.py", line 505, in forward
    adapool_cuda.forward_3d_edscw(input.contiguous(), kernel, stride, output, return_mask, mask)
RuntimeError: input.is_cuda() INTERNAL ASSERT FAILED at "CUDA/adapool_cuda.cpp":616, please report a bug to PyTorch. input must be a CUDA tensor

I've checked whether the input is on CUDA by printing input.is_cuda just before the adapool_cuda.forward_3d_edscw call, and it displays True. But when it reaches the cpp file, the input is no longer treated as a CUDA tensor. I'm really confused about that. Hope to receive your reply soon. Thanks!
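Since the C++ assert can fire on any of the tensors handed to the extension (not only the one being printed), a hedged debugging aid is to check every argument's device and contiguity just before the call. The helper below is hypothetical, not part of adaPool; it mirrors the asserts in adapool_cuda.cpp:

```python
import torch

def check_cuda_inputs(*tensors):
    """Hypothetical debugging helper: verify every tensor handed to a
    custom CUDA extension is on a GPU and contiguous, mirroring the
    input.is_cuda() / input.is_contiguous() asserts in adapool_cuda.cpp."""
    for i, t in enumerate(tensors):
        if not t.is_cuda:
            raise RuntimeError(f"argument {i} is on {t.device}, expected a CUDA device")
        if not t.is_contiguous():
            raise RuntimeError(f"argument {i} is not contiguous")
```

Under `nn.DataParallel`, each replica receives its inputs on its own GPU, so checking every tensor (including any auxiliary output/mask buffers created in the Python wrapper) can reveal one that was allocated on the wrong device.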

Does AdaPool2d's beta require fixed image size?

I'm currently running AdaPool2d as a replacement for MaxPool2d in ResNet's stem, similar to how you did it in SoftPool. However, I keep getting an AssertionError at line 1325, as shown below:

assert isinstance(beta, tuple) or torch.is_tensor(beta), 'Agument `beta` can only be initialized with Tuple or Tensor type objects and should correspond to size (oH, oW)'

Does this mean beta requires a fixed image size, e.g. (224,224)? Or is there a way to make it adaptive across varying image sizes (e.g. for object detection)?
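Reading the assertion message, `beta` must be either a tuple giving the output size (oH, oW) or a tensor of that shape, so it is indeed tied to a fixed output resolution. A minimal sketch of the tensor form, assuming a 224×224 input pooled with stride 2 (the concrete sizes here are illustrative, not taken from the library):

```python
import torch

# beta is a learnable per-position mixing weight of shape (oH, oW).
# For a 224x224 input with kernel 2 / stride 2 the output is 112x112.
oH, oW = 112, 112
beta = torch.nn.Parameter(torch.full((oH, oW), 0.5))  # 0.5 = equal mix
```

Because the parameter's spatial shape is baked in at construction time, variable input sizes (as in object detection) would need either per-resolution beta tensors or interpolating beta to the current output size.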

How to set the beta?

Hi, thanks for your nice work! However, I am confused about how to set the beta.

(two screenshots omitted)

Solution for "ptxas fatal : Unresolved extern function '_Z3powdi'"

CUDA 11.0

When I tried to build the project on Windows 10, I encountered the following problem:
"ptxas fatal : Unresolved extern function '_Z3powdi'"

Reason: incorrect use of the pow function in the .cu code.
Solution: for example, pow(x, 2) can be changed to x * x.

pytorch

Can the adapool function be implemented completely in PyTorch, without any C++/CUDA code?
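Partly, yes. As a hedged sketch (not the repository's implementation, and much slower than its CUDA kernels), the exponential-maximum (eM) component of adaPool can be written in pure PyTorch with `F.unfold`, weighting each pooling window by the softmax of its own activations, as in SoftPool:

```python
import torch
import torch.nn.functional as F

def adapool_em_2d(x, kernel_size=2, stride=2):
    """Pure-PyTorch sketch of exponential-maximum (eM) pooling:
    each window is averaged with softmax(activations) weights.
    Function name and defaults are illustrative, not the library's API."""
    b, c, h, w = x.shape
    # Extract k*k patches: (B, C*k*k, L), L = number of output positions
    patches = F.unfold(x, kernel_size, stride=stride)
    patches = patches.view(b, c, kernel_size * kernel_size, -1)
    weights = torch.softmax(patches, dim=2)   # exp(x_i) / sum_j exp(x_j) per window
    pooled = (weights * patches).sum(dim=2)   # softmax-weighted average
    oh = (h - kernel_size) // stride + 1
    ow = (w - kernel_size) // stride + 1
    return pooled.view(b, c, oh, ow)

x = torch.randn(1, 3, 8, 8)
y = adapool_em_2d(x)   # shape (1, 3, 4, 4)
```

The full adaPool output additionally blends this with the exponential distance-based (eDSCW) component via beta, which could be sketched the same way; the CUDA extension exists mainly for speed and memory, not because the math needs it.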
