
ucbrise / actnn

196 stars · 7 watchers · 30 forks · 180 KB

ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training

License: MIT

Python 86.58% C++ 4.03% Cuda 8.54% Dockerfile 0.06% Shell 0.36% C 0.43%

actnn's Issues

QConv1d: no valid convolution algorithms available in CuDNN

In https://github.com/ucbrise/actnn/blob/main/tests/test_conv_layer.py, lines 52-56:

I hit this error when running test_conv_layer.py, with the following stack trace:

~/code/actnn/tests$ CUDA_VISIBLE_DEVICES=1 python test_conv_layer.py
Conv1d(100, 4, kernel_size=(3,), stride=(2,), groups=2)
QConv1d(100, 4, kernel_size=(3,), stride=(2,), groups=2)
torch.Size([4, 50, 3])
torch.Size([10, 100, 2000]) tensor([2, 0, 3, 0, 0, 2, 3, 0, 2, 1], device='cuda:0')
Traceback (most recent call last):
  File "test_conv_layer.py", line 60, in <module>
    test(layer, qlayer, x, y)
  File "test_conv_layer.py", line 33, in test
    grads.append(get_grad(qlayer))
  File "test_conv_layer.py", line 27, in get_grad
    loss.backward()
  File "/data/users/root/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/data/users/root/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
  File "/data/users/root/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/autograd/function.py", line 89, in apply
    return self._forward_cls.backward(self, *args)  # type: ignore
  File "/data/users/root/code/actnn/actnn/actnn/ops.py", line 244, in backward
    return convnd.run_backward(1, ctx, grad_output, [0, 2], _single)
  File "/data/users/root/code/actnn/actnn/actnn/ops.py", line 225, in run_backward
    [ctx.needs_input_grad[0], ctx.needs_input_grad[1]])
RuntimeError: no valid convolution algorithms available in CuDNN
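
A quick way to tell an ActNN bug apart from a plain cuDNN algorithm-selection failure (a diagnostic sketch, not a confirmed fix) is to rerun the test with cuDNN disabled, so PyTorch falls back to its native convolution kernels:

import torch

# if the test passes with this, cuDNN simply has no valid algorithm for
# this grouped Conv1d configuration and the failure is not ActNN-specific
torch.backends.cudnn.enabled = False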

There is something wrong with loss.backward()

I simply wrap the model with

model = actnn.QModule(model)

After that, training fails as follows:

Traceback (most recent call last):
  File "train.py", line 336, in <module>
    main()
  File "train.py", line 332, in main
    train(args, model)
  File "train.py", line 212, in train
    loss.backward()
  File "/home/hku/anaconda3/envs/torch17/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/hku/anaconda3/envs/torch17/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function linearBackward returned an invalid gradient at index 0 - got [25216, 3072] but expected shape compatible with [128, 197, 3072]
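
For what it's worth, the two shapes in the error differ only by a flattened batch: 128 × 197 = 25216. So linearBackward returned a 2-D gradient where autograd expected the 3-D input shape, which suggests a missing reshape in the backward pass. A small illustration of the relationship (the actual fix location in ActNN is an assumption):

import torch

grad = torch.empty(25216, 3072)     # what linearBackward returned
grad = grad.view(128, 197, 3072)    # what autograd expects: 128 * 197 == 25216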

Cannot save memory during the forward pass

Hi, part of my current work builds on this project, and I am trying to quantize activations during the forward pass.
However, I noticed that although I can replace ctx.saved_tensors with my new compressed activation, the overall CUDA memory usage does not decrease; it even increases.
What I found is that the original fp32 activation is never freed and is still counted as part of the CUDA memory usage. I am wondering about the reason behind this and looking for a solution.

For reference, what I did looks like this:

def forward(self, input):
    return qconv2d.apply(input, weight, .....)

Inside qconv2d, I do:

input_int8, scale_inp = quantize_int8(input)
...
ctx.save_for_backward(input_int8, scale_inp, weight, bias)

However, judging by torch.cuda.memory_allocated(0), both input and input_int8 stay allocated during the forward pass?

Looking forward to your reply.
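
In case it helps others hitting this: ctx.save_for_backward only controls what autograd keeps; the fp32 input is freed only once every other reference to it is also gone (the caller's local variable, the previous layer's own saved tensors, and so on), and torch.cuda.memory_allocated keeps counting it until then. A minimal, self-contained sketch of the pattern with assumed names (not ActNN's actual op):

import torch
import torch.nn.functional as F

class QConv2dSketch(torch.autograd.Function):
    # save an int8 copy plus a scale instead of the fp32 input
    @staticmethod
    def forward(ctx, input, weight):
        scale = input.abs().max() / 127
        input_int8 = (input / scale).round().to(torch.int8)
        ctx.save_for_backward(input_int8, scale, weight)
        return F.conv2d(input, weight)

    @staticmethod
    def backward(ctx, grad_output):
        input_int8, scale, weight = ctx.saved_tensors
        input = input_int8.float() * scale  # dequantize only for the gradient
        grad_input = torch.nn.grad.conv2d_input(input.shape, weight, grad_output)
        grad_weight = torch.nn.grad.conv2d_weight(input, weight.shape, grad_output)
        return grad_input, grad_weight

Even with this, memory_allocated right after the forward will still include the fp32 input if, for example, it is also the saved output of the preceding layer; deleting your own references and measuring the peak across forward plus backward gives a fairer number.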

install problem

Hi, when installing actnn I get:

pip._internal.exceptions.InstallationError: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode.

Any help is appreciated.
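
This error usually means pip was pointed at the repository root, while setup.py lives in the actnn/actnn subdirectory (the tracebacks elsewhere on this page show paths like actnn/actnn/setup.py). A sketch of the install sequence under that assumption:

git clone https://github.com/ucbrise/actnn.git
cd actnn/actnn        # setup.py lives here, not in the repo root
pip install -v -e .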

Something goes wrong when installing actnn on Ubuntu 18.04

When installing actnn on Ubuntu 18.04, the build fails with the following output:
/usr/local/cuda-10.1/bin/nvcc -I/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include -I/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/TH -I/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/zhaofy/anaconda3/envs/actnn/include/python3.8 -c actnn/cpp_extension/minimax_cuda_kernel.cu -o build/temp.linux-x86_64-3.8/actnn/cpp_extension/minimax_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=minimax -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign

/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign

/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/builtin_function.h(97): warning: statement is unreachable

/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(191): warning: statement is unreachable

/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/boxing/impl/boxing.h(100): warning: integer conversion resulted in a change of sign

/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/op_registration/op_whitelist.h(39): warning: integer conversion resulted in a change of sign

/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/ATen/core/builtin_function.h(97): warning: statement is unreachable

/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(191): warning: statement is unreachable

/usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/include/c++/7/bits/basic_string.tcc:578:28:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.h:5042:20:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.h:5063:24:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/7/bits/basic_string.h:6688:95:   required from here
/usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ without object
       __p->_M_set_sharable();
       ~~~~~~~~~^~
/usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/include/c++/7/bits/basic_string.tcc:578:28:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.h:5042:20:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.h:5063:24:   required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.tcc:656:134:   required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/7/bits/basic_string.h:6693:95:   required from here
/usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ without object
/home/zhaofy/anaconda3/envs/actnn/lib/python3.8/site-packages/torch/utils/cpp_extension.py:352: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
error: command '/usr/local/cuda-10.1/bin/nvcc' failed with exit status 1

ERROR: Command errored out with exit status 1: /home/zhaofy/anaconda3/envs/actnn/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/zhaofy/actnn-main/actnn/setup.py'"'"'; file='"'"'/home/zhaofy/actnn-main/actnn/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
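
The _M_set_sharable errors in basic_string.tcc are a known incompatibility between CUDA 10.x's nvcc and the GCC 7 headers shipped with Ubuntu 18.04, not something in ActNN's own code. A sketch of the usual workaround, assuming that diagnosis (paths taken from the log above):

# build with an older host compiler that CUDA 10.1 supports
sudo apt install gcc-6 g++-6
# let nvcc pick it up ahead of the system gcc-7
sudo ln -s /usr/bin/gcc-6 /usr/local/cuda-10.1/bin/gcc
sudo ln -s /usr/bin/g++-6 /usr/local/cuda-10.1/bin/g++
pip install -v -e .

Installing ninja (pip install ninja) only removes the slow-backend warning; it does not affect the compiler error itself.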

How can ActNN be used on Windows 10?

Hello, thanks for your work. I tried actnn on Ubuntu and it worked, but when I used it on Windows 10 I got an error. Could you please help me solve this problem?
(screenshot of the error attached)

how to deploy actnn with libtorch?

This is really interesting work! I am wondering how we can deploy actnn using libtorch. Support for libtorch or ONNX (going from Python to C++) would make actnn more useful. Thank you~

memory consumption question

Hi, I am trying to use actnn on transformer models, and I am testing it on a simple nn.Linear module:

import torch
import torch.nn as nn
import torch.nn.functional as F
import actnn
from actnn import config, QScheme, QModule
class GEGLU(nn.Module):
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        # self.proj = LoRACompatibleLinear(dim_in, dim_out)
        self.proj = nn.Linear(dim_in, dim_out)

    def gelu(self, gate):
        if gate.device.type != "mps":
            return F.gelu(gate)
        # mps: gelu is not implemented for float16
        return F.gelu(gate.to(dtype=torch.float32)).to(dtype=gate.dtype)

    def forward(self, hidden_states):

        tmp = self.proj(hidden_states)
        # print((tmp.numel()-hidden_states.numel())*4/1e6)
        hidden_states, gate = tmp.chunk(2, dim=-1)
        # import pdb;pdb.set_trace()

        return hidden_states * self.gelu(gate)


def test_m():
    model = GEGLU(640, 5120)
    model = QModule(model)
    model.cuda()

    inp = torch.rand(128, 2304, 640).cuda()

    _ = model(torch.rand(2, 2304, 640).cuda())
    # out.mean().backward()

    beg = torch.cuda.memory_allocated()/1e6

    out = model(inp)
    print("memory:", torch.cuda.memory_allocated()/1e6-beg)
    # print(model.proj.weight.grad.numel()/1e6)
    # out.mean().backward()

actnn.set_optimization_level("L3")

test_m()

However, the memory consumption I measure with the code above barely changes whether or not I wrap the model with model = QModule(model). For example:

  • with QModule: 2843.49 MB
  • without QModule: 2829 MB

I added prints in actnn/actnn/ops.py to see how much memory is saved after quantized = quantize_activation(input, scheme); the quantized tensors are much smaller than the inputs, so roughly 700 MB should be saved, but the numbers above do not show this difference.
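
One thing worth ruling out before suspecting ActNN: torch.cuda.memory_allocated right after the forward still counts the live output tensor and any fp32 activations that other references keep alive, and the compressed activations only pay off at the peak between forward and backward. A measurement sketch that could replace the measurement lines inside test_m above (making no assumptions about ActNN internals):

torch.cuda.reset_peak_memory_stats()
out = model(inp)
out.mean().backward()        # saved activations are released as backward consumes them
torch.cuda.synchronize()
print("peak MB:", torch.cuda.max_memory_allocated() / 1e6)

Comparing this peak with and without QModule should expose the saving that the post-forward memory_allocated delta hides.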

Installing actnn with some errors

When installing actnn on Ubuntu 18.04, something goes wrong:

/usr/include/c++/7/bits/basic_string.h:6693:95:   required from here
/usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ without object
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1539, in _run_ninja_build
    env=env)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/huangry/program/actnn/actnn/setup.py", line 24, in <module>
    packages=find_packages()
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/command/develop.py", line 34, in run
    self.install_for_development()
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/command/develop.py", line 136, in install_for_development
    self.run_command('build_ext')
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 670, in build_extensions
    build_ext.build_extensions(self)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
    depends=ext.depends)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 500, in unix_wrap_ninja_compile
    with_cuda=with_cuda)
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1255, in _write_ninja_file_and_compile_objects
    error_prefix='Error compiling objects for extension')
  File "/home/huangry/miniconda3/envs/py37/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

how to avoid memory fragmentation in ActNN?

May I know how you implemented this defragmentation in ActNN?
(screenshot attached)

In my model-training experience, a smaller MAX_SPLIT_SIZE gives worse performance, while a bigger MAX_SPLIT_SIZE eventually results in OOM.
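
For anyone else reading: assuming MAX_SPLIT_SIZE here refers to the max_split_size_mb knob of PyTorch's caching allocator (an assumption, since ActNN's own mechanism isn't shown), on recent PyTorch it is set through an environment variable before the first CUDA allocation:

import os
# hypothetical value; smaller splits reduce fragmentation at some speed cost
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
import torch  # must happen before any CUDA memory is allocated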

what is pipeline_threshold used for?

I want to figure out what pipeline and pipeline_threshold mean in actnn. I didn't find examples in the tests or the README, so could you give some examples or explain them a bit? Thanks
(screenshot attached)

PS: I'm currently reading the actnn source code and have learned a lot from it; my Chinese notes are here.

ceil_mode is not supported

RuntimeError: Expected ceil_mode == false to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
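
A possible workaround while ceil_mode is unsupported (a sketch; model stands for your own network, and note that flipping ceil_mode changes output sizes, so downstream shapes must be re-checked):

import torch.nn as nn

for m in model.modules():
    if isinstance(m, (nn.MaxPool2d, nn.AvgPool2d)) and m.ceil_mode:
        m.ceil_mode = False  # padding may need adjusting to keep shapes identical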

There are some errors when I install actnn

Hello!
Thanks for your excellent work! I think it will be useful for me.
But I ran into some errors when installing actnn.

D:/actnn/actnn/actnn/cpp_extension/minimax_cuda_kernel.cu(19): error: more than one instance of overloaded function "__shfl_sync" matches the argument list:
                function "__shfl_sync(unsigned int, __half, int, int)"
                function "__shfl_sync(unsigned int, c10::Half, unsigned int, int)"
                argument types are: (const unsigned int, __half, const unsigned int, const int)

D:/actnn/actnn/actnn/cpp_extension/minimax_cuda_kernel.cu(52): error: more than one instance of overloaded function "__shfl_sync" matches the argument list:
            function "__shfl_sync(unsigned int, int, int, int)"
            function "__shfl_sync(unsigned int, unsigned int, int, int)"
            function "__shfl_sync(unsigned int, float, int, int)"
            function "__shfl_sync(unsigned int, long long, int, int)"
            function "__shfl_sync(unsigned int, unsigned long long, int, int)"
            function "__shfl_sync(unsigned int, double, int, int)"
            function "__shfl_sync(unsigned int, long, int, int)"
            function "__shfl_sync(unsigned int, unsigned long, int, int)"
            function "__shfl_sync(unsigned int, __half, int, int)"
            function "__shfl_sync(unsigned int, c10::Half, unsigned int, int)"
            argument types are: (unsigned int, c10::Half, int, int)
          detected during instantiation of "void minimax_cuda_kernel(const scalar_t *, scalar_t *, scalar_t *, int64_t, int64_t) [with scalar_t=c10::Half]"
(82): here

D:/actnn/actnn/actnn/cpp_extension/minimax_cuda_kernel.cu(65): error: more than one instance of overloaded function "__shfl_sync" matches the argument list:
            function "__shfl_sync(unsigned int, int, int, int)"
            function "__shfl_sync(unsigned int, unsigned int, int, int)"
            function "__shfl_sync(unsigned int, float, int, int)"
            function "__shfl_sync(unsigned int, long long, int, int)"
            function "__shfl_sync(unsigned int, unsigned long long, int, int)"
            function "__shfl_sync(unsigned int, double, int, int)"
            function "__shfl_sync(unsigned int, long, int, int)"
            function "__shfl_sync(unsigned int, unsigned long, int, int)"
            function "__shfl_sync(unsigned int, __half, int, int)"
            function "__shfl_sync(unsigned int, c10::Half, unsigned int, int)"
            argument types are: (unsigned int, c10::Half, int, int)
          detected during instantiation of "void minimax_cuda_kernel(const scalar_t *, scalar_t *, scalar_t *, int64_t, int64_t) [with scalar_t=c10::Half]"
(82): here

3 errors detected in the compilation of "C:/Users/xJun/AppData/Local/Temp/tmpxft_00004d44_00000000-7_minimax_cuda_kernel.cpp1.ii".
D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(126): error: no instance of overloaded function "std::min" matches the argument list
            argument types are: (long long, long)

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(190): error: no instance of overloaded function "std::min" matches the argument list
            argument types are: (long long, long)

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(302): error: no instance of overloaded function "std::min" matches the argument list
            argument types are: (long long, long)

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(379): error: no instance of overloaded function "std::min" matches the argument list
            argument types are: (long long, long)

4 errors detected in the compilation of "C:/Users/xJun/AppData/Local/Temp/tmpxft_000024d8_00000000-7_quantization_cuda_kernel.cpp1.ii".
D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_mixed_precision_kernel<double> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: identifier "fmax<double, float, (int)0> " is undefined in device code

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_mixed_precision_kernel<float> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: identifier "fmax<double, float, (int)0> " is undefined in device code

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_mixed_precision_kernel< ::c10::Half> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(64): error: identifier "fmax<double, float, (int)0> " is undefined in device code

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel<double, (bool)0> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel<float, (bool)0> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel< ::c10::Half, (bool)0> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel<double, (bool)1> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel<float, (bool)1> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: calling a __host__ function("fmax<double, float, (int)0> ") from a __global__ function("pack_single_precision_kernel< ::c10::Half, (bool)1> ") is not allowed

D:/actnn/actnn/actnn/cpp_extension/quantization_cuda_kernel.cu(252): error: identifier "fmax<double, float, (int)0> " is undefined in device code

18 errors detected in the compilation of "C:/Users/xJun/AppData/Local/Temp/tmpxft_00004cdc_00000000-7_quantization_cuda_kernel.cpp1.ii".
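
These all look like host-compiler strictness on Windows: MSVC refuses std::min on mixed (long long, long) arguments, cannot pick a __shfl_sync overload for c10::Half, and resolves fmax(double, float) to a host-only template. A hedged sketch of the kind of local edits that typically clear such errors; the variable names are illustrative, not the actual ones in the kernels:

// hypothetical fragments inside the .cu files
// 1) give std::min one explicit type so (long long, long) resolves
int64_t bound = std::min<int64_t>(block_offset + chunk, num_elements);

// 2) shuffle a concrete float instead of a c10::Half
float v = __shfl_sync(0xffffffff, static_cast<float>(h), src_lane, 32);

// 3) keep the max in device code with a single-precision intrinsic
float m = fmaxf(static_cast<float>(x), 0.0f);  // instead of fmax(x, 0.0)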

Transformer Benchmarks?

Just curious: have you tested this method on transformer benchmarks such as BERT and measured the quantization accuracy?

How to use it for GANs

Thank you for sharing your great work!

Can ActNN be used for GAN models? I used ActNN with the following GAN architecture, but I got the error below.
https://github.com/knazeri/edge-connect

Traceback (most recent call last):
  File "train.py", line 3, in <module>
    main(mode=1)
  File "***\edge-connect\main.py", line 50, in main
    model = EdgeConnect(config)
  File "***\edge-connect\src\edge_connect.py", line 27, in __init__
    self.edge_model = EdgeModel(config).to(config.DEVICE)
  File "***\edge-connect\src\models.py", line 67, in __init__
    generator = actnn.QModule(generator)
  File "***\actnn\actnn\actnn\module.py", line 18, in __init__
    QModule.convert_layers(model)
  File "***\actnn\actnn\actnn\module.py", line 76, in convert_layers
    QModule.convert_layers(child)
  File "***\actnn\actnn\actnn\module.py", line 48, in convert_layers
    child.groups, child.bias, child.dilation, child.padding_mode))
  File "***\actnn\actnn\actnn\layers.py", line 137, in __init__
    padding, output_padding, groups, bias, dilation, padding_mode)
  File "***\Python37\lib\site-packages\torch\nn\modules\conv.py", line 904, in __init__
    True, output_padding, groups, bias, padding_mode, **factory_kwargs)
  File "***\Python37\lib\site-packages\torch\nn\modules\conv.py", line 602, in __init__
    groups, bias, padding_mode, **factory_kwargs)
  File "***\Python37\lib\site-packages\torch\nn\modules\conv.py", line 133, in __init__
    if bias:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous

I have added the following code at model.py#L61:

        generator = EdgeGenerator(use_spectral_norm=True)
        discriminator = Discriminator(in_channels=2, use_sigmoid=config.GAN_LOSS != 'hinge')
        generator = actnn.QModule(generator)
        print(generator)
        exit()

I would like to seek your advice on this problem.
Thanks
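
From the trace, convert_layers passes the original layer's bias Tensor into a constructor that expects a bool, and `if bias:` on a multi-element Tensor is ambiguous. A sketch of the one-argument change that should avoid it (the surrounding call is reconstructed from the trace, so treat the exact argument list as an assumption):

# hypothetical patch around actnn/module.py line 48
layer = QConvTranspose2d(child.in_channels, child.out_channels,
                         child.kernel_size, child.stride, child.padding,
                         child.output_padding, child.groups,
                         child.bias is not None,  # was: child.bias (a Tensor)
                         child.dilation, child.padding_mode)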

[Feature] QDropout Implementation

Thanks for the great work.
I noticed that it does not provide an implementation of QDropout, which is a common operator in NNs. I have implemented a version of QDropout based on QReLU; may I submit my implementation via a pull request?
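
For context, a minimal sketch of what such an operator can look like (an illustration only, not the implementation offered above): dropout's backward needs just the keep-mask, which is inherently 1 bit per element, so it compresses far better than a saved fp32 activation.

import torch
import torch.nn as nn

class _QDropoutFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, p):
        mask = torch.rand_like(x) >= p  # bool tensor; ActNN-style code
        ctx.save_for_backward(mask)     # could pack this down to 1 bit/elem
        ctx.scale = 1.0 / (1.0 - p)
        return x * mask * ctx.scale

    @staticmethod
    def backward(ctx, grad_out):
        (mask,) = ctx.saved_tensors
        return grad_out * mask * ctx.scale, None

class QDropout(nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    def forward(self, x):
        return _QDropoutFn.apply(x, self.p) if self.training else x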

problem with torch._six on latest pytorch

Hi, thanks for your great work, but something breaks with the latest torch.
When trying to import actnn on PyTorch 1.9, I hit an error:

>>> import actnn
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/workspace/actnn-main/actnn/actnn/__init__.py", line 1, in <module>
    from . import dataloader
  File "/workspace/actnn-main/actnn/actnn/dataloader.py", line 16, in <module>
    from torch._six import queue, string_classes
ImportError: cannot import name 'queue' from 'torch._six' (/miniconda3/lib/python3.8/site-packages/torch/_six.py)

Then I compared torch 1.9 with torch 1.7: there is a commit in torch that removes some re-exports from torch._six. After reverting torch._six to the old version, actnn works fine again.

Maybe just importing queue directly would work as well; the other imports from torch._six should be fixed too.
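
A sketch of the drop-in fix for actnn/dataloader.py along the lines suggested above (torch._six only ever re-exported the stdlib queue module, so importing it directly is safe; the fallback tuple matches what older torch._six defined):

# replacement for: from torch._six import queue, string_classes
import queue
try:
    from torch._six import string_classes
except ImportError:              # removed in newer torch releases
    string_classes = (str, bytes)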
