Comments (2)
Same problem here. I build from source because this is a somewhat older CentOS cluster with a slightly older libc++, which causes issues with the normal `pip install bitsandbytes`.
Initializing the modules for CUDA use:
module use /opt/insy/modulefiles
module load cuda/12.4
module use cuda/12.4
module load cudnn/12-8.9.1.23
module use cudnn/12-8.9.1.23
module load devtoolset
module use devtoolset
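As an aside, `module use DIR` prepends a directory to `MODULEPATH`, while `module load NAME` loads a module, so the paired `module use cuda/12.4`-style lines above likely just push relative paths onto `MODULEPATH`; that would match the bogus `cuda/12.4`-style entries flagged as non-existent in the diagnostic output below. A toy emulation of the mechanism (the `module_use` helper is mine, not the real `module` command):

```shell
MODULEPATH=""
# Stand-in for what `module use` does: prepend its argument to MODULEPATH.
module_use() { MODULEPATH="$1${MODULEPATH:+:$MODULEPATH}"; }

module_use /opt/insy/modulefiles   # sensible: an absolute modulefiles directory
module_use cuda/12.4               # a module *name*: becomes a bogus relative entry
echo "$MODULEPATH"                 # cuda/12.4:/opt/insy/modulefiles
```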
I also built the latest CMake from source.
Building on the cluster following the manual:
cd /tmp/bitsandbytes
make clean
cmake -Bstatic -DCOMPUTE_BACKEND=cuda -S .
make
pip install .
python -m bitsandbytes
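A hedged observation on the sequence above: `cmake -Bstatic` generates the build system in a `static/` subdirectory, so a bare `make` in the source root may build nothing, and thus never produce the CUDA binaries. A toy demonstration of the directory layout, with stand-in files instead of a real CMake run:

```shell
src=$(mktemp -d)                 # stand-in for the bitsandbytes checkout
mkdir -p "$src/static"
touch "$src/static/Makefile"     # cmake -Bstatic puts the generated Makefile here
if [ -f "$src/Makefile" ]; then
  echo "Makefile in source root"
else
  echo "no Makefile in source root: use 'make -C static' or 'cmake --build static'"
fi
```

If this is the cause, `make -C static` (or re-running CMake without `-Bstatic` so it generates in the source root) before `pip install .` should produce the missing `.so` files.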
now gives:
Could not find the bitsandbytes CUDA binary at PosixPath('/tmp/bitsandbytes/bitsandbytes/libbitsandbytes_cuda121_nocublaslt.so')
Could not load bitsandbytes native library: /tmp/bitsandbytes/bitsandbytes/libbitsandbytes_cpu.so: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "/tmp/bitsandbytes/bitsandbytes/cextension.py", line 109, in <module>
    lib = get_native_library()
  File "/tmp/bitsandbytes/bitsandbytes/cextension.py", line 96, in get_native_library
    dll = ct.cdll.LoadLibrary(str(binary_path))
  File "/home/nfs/wpasman/bin/Python-3.10.13/Lib/ctypes/__init__.py", line 452, in LoadLibrary
    return self._dlltype(name)
  File "/home/nfs/wpasman/bin/Python-3.10.13/Lib/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /tmp/bitsandbytes/bitsandbytes/libbitsandbytes_cpu.so: cannot open shared object file: No such file or directory
CUDA Setup failed despite CUDA being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
CUDA specs: CUDASpecs(highest_compute_capability=(5, 0), cuda_version_string='121', cuda_version_tuple=(12, 1))
PyTorch settings found: CUDA_VERSION=121, Highest Compute Capability: (5, 0).
Library not found: /tmp/bitsandbytes/bitsandbytes/libbitsandbytes_cuda121_nocublaslt.so. Maybe you need to compile it from source?
If you compiled from source, try again with `make CUDA_VERSION=DETECTED_CUDA_VERSION`,
for example, `make CUDA_VERSION=113`.
The CUDA version for the compile might depend on your conda install, if using conda.
Inspect CUDA version via `conda list | grep cuda`.
To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/docs/source/nonpytorchcuda.mdx
WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
If you run into issues with 8-bit matmul, you can try 4-bit quantization:
https://huggingface.co/blog/4bit-transformers-bitsandbytes
The directory listed in your path is found to be non-existent: /opt/insy/devtoolset/11/root/usr/lib/dyninst
The directory listed in your path is found to be non-existent: /opt/insy/cudnn/12-8.9.1.23/lib64
The directory listed in your path is found to be non-existent: -L /home/nfs/wpasman/openssl/lib64 -Wl,-rpath,/home/nfs/wpasman/openssl/lib64
The directory listed in your path is found to be non-existent: /tmp/bitsandbytes/devtoolset
The directory listed in your path is found to be non-existent: 1;/tmp/bitsandbytes/cudnn/12-8.9.1.23
The directory listed in your path is found to be non-existent: 1;/tmp/bitsandbytes/cuda/12.4
The directory listed in your path is found to be non-existent: 1;/opt/insy/modulefiles
The directory listed in your path is found to be non-existent: 1;/etc/modulefiles
The directory listed in your path is found to be non-existent: 1;/usr/share/modulefiles
The directory listed in your path is found to be non-existent: 1;/usr/share/modulefiles/Linux
The directory listed in your path is found to be non-existent: 1;/usr/share/modulefiles/Core
The directory listed in your path is found to be non-existent: 1;/usr/share/lmod/lmod/modulefiles/Core
The directory listed in your path is found to be non-existent: /opt/insy/cudnn/12-8.9.1.23/lib64
The directory listed in your path is found to be non-existent: /usr/lib64/qt-3.3/include
The directory listed in your path is found to be non-existent: /tmp/bitsandbytes/devtoolset
The directory listed in your path is found to be non-existent: /tmp/bitsandbytes/cudnn/12-8.9.1.23
The directory listed in your path is found to be non-existent: /tmp/bitsandbytes/cuda/12.4
The directory listed in your path is found to be non-existent: /usr/share/modulefiles/Linux
The directory listed in your path is found to be non-existent: /usr/share/modulefiles/Core
The directory listed in your path is found to be non-existent: cuda/12.4
The directory listed in your path is found to be non-existent: cudnn/12-8.9.1.23
The directory listed in your path is found to be non-existent: devtoolset/11
The directory listed in your path is found to be non-existent: 1;/opt/insy/cuda/12.4/bin
The directory listed in your path is found to be non-existent: 1;/home/nfs/wpasman/openssl/bin
The directory listed in your path is found to be non-existent: 1;/home/nfs/wpasman/bin/Python-3.10.13
The directory listed in your path is found to be non-existent: 1;/home/nfs/wpasman/bin
The directory listed in your path is found to be non-existent: 1;/usr/lib64/qt-3.3/bin
The directory listed in your path is found to be non-existent: 1;/usr/local/bin
The directory listed in your path is found to be non-existent: 1;/usr/bin
The directory listed in your path is found to be non-existent: 1;/usr/local/sbin
The directory listed in your path is found to be non-existent: 1;/usr/sbin
The directory listed in your path is found to be non-existent: 1;/opt/insy/bin
The directory listed in your path is found to be non-existent: 1;/opt/insy/cuda/12.4/include
The directory listed in your path is found to be non-existent: /opt/insy/cudnn/12-8.9.1.23/lib64
The directory listed in your path is found to be non-existent: 1;/opt/insy/cuda/12.4/lib64
The directory listed in your path is found to be non-existent: /opt/insy/devtoolset/11/root/usr/lib64/pkgconfig
The directory listed in your path is found to be non-existent: 1;/opt/insy/devtoolset/11/root/usr/lib
The directory listed in your path is found to be non-existent: 1;/opt/insy/devtoolset/11/root/usr/lib64/dyninst
The directory listed in your path is found to be non-existent: 1;/opt/insy/devtoolset/11/root/usr/lib/dyninst
The directory listed in your path is found to be non-existent: 1;/opt/insy/cudnn/12-8.9.1.23/lib64
The directory listed in your path is found to be non-existent: 1;/opt/insy/cuda/12.4/lib64
The directory listed in your path is found to be non-existent: 1;/home/nfs/wpasman/openssl/lib64
The directory listed in your path is found to be non-existent: 1;/home/nfs/wpasman/bin
The directory listed in your path is found to be non-existent: 1;/opt/insy/cudnn/12-8.9.1.23/lib
The directory listed in your path is found to be non-existent: /opt/insy/devtoolset/11/root/usr/lib64/pkgconfig
The directory listed in your path is found to be non-existent: 1;/usr/share/lmod/lmod/share/man
Found duplicate CUDA runtime files (see below).
We select the PyTorch default CUDA runtime, which is 12.1,
but this might mismatch with the CUDA version that is needed for bitsandbytes.
To override this behavior set the `BNB_CUDA_VERSION=<version string, e.g. 122>` environmental variable.
For example, if you want to use the CUDA version 122,
BNB_CUDA_VERSION=122 python ...
OR set the environmental variable in your .bashrc:
export BNB_CUDA_VERSION=122
In the case of a manual override, make sure you set LD_LIBRARY_PATH, e.g.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2,
* Found CUDA runtime at: /opt/insy/cuda/12.4/lib64/libcudart.so.12
* Found CUDA runtime at: /opt/insy/cuda/12.4/lib64/libcudart.so.12.4.99
* Found CUDA runtime at: /opt/insy/cuda/12.4/lib64/libcudart.so
* Found CUDA runtime at: /opt/insy/cuda/12.4/lib64/libcudart.so.12
* Found CUDA runtime at: /opt/insy/cuda/12.4/lib64/libcudart.so.12.4.99
* Found CUDA runtime at: /opt/insy/cuda/12.4/lib64/libcudart.so
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and CUDA is callable...
Couldn't load the bitsandbytes library, likely due to missing binaries.
Please ensure bitsandbytes is properly installed.
For source installations, compile the binaries with `cmake -DCOMPUTE_BACKEND=cuda -S .`.
See the documentation for more details if needed.
Trying a simple check anyway, but this will likely fail...
Traceback (most recent call last):
  File "/tmp/bitsandbytes/bitsandbytes/diagnostics/main.py", line 66, in main
    sanity_check()
  File "/tmp/bitsandbytes/bitsandbytes/diagnostics/main.py", line 40, in sanity_check
    adam.step()
  File "/tmp/bitsandbytes/venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 385, in wrapper
    out = func(*args, **kwargs)
  File "/tmp/bitsandbytes/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/tmp/bitsandbytes/bitsandbytes/optim/optimizer.py", line 287, in step
    self.update_step(group, p, gindex, pindex)
  File "/tmp/bitsandbytes/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/tmp/bitsandbytes/bitsandbytes/optim/optimizer.py", line 496, in update_step
    F.optimizer_update_32bit(
  File "/tmp/bitsandbytes/bitsandbytes/functional.py", line 1584, in optimizer_update_32bit
    optim_func = str2optimizer32bit[optimizer_name][0]
NameError: name 'str2optimizer32bit' is not defined
Above we output some debug information.
Please provide this info when creating an issue via https://github.com/TimDettmers/bitsandbytes/issues/new/choose
WARNING: Please be sure to sanitize sensitive info from the output before posting it.
from bitsandbytes.
@Wouter1 Since you've built with CUDA 12.4, try with `export BNB_CUDA_VERSION=124`. In this case it should look for the built library at /tmp/bitsandbytes/bitsandbytes/libbitsandbytes_cuda124_nocublaslt.so.
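In other words, the override changes which filename the loader searches for. A small sketch of that lookup (directory and naming scheme taken from the log above, not from bitsandbytes' actual code):

```python
import os
from pathlib import Path

# The suggested override for a CUDA 12.4 build:
os.environ["BNB_CUDA_VERSION"] = "124"
ver = os.environ["BNB_CUDA_VERSION"]

# Expected binary path, following the pattern reported in the diagnostic output
# (the _nocublaslt suffix appears because the GPU's compute capability is < 7.5):
expected = Path("/tmp/bitsandbytes/bitsandbytes") / f"libbitsandbytes_cuda{ver}_nocublaslt.so"
print(expected)
```

If that file still doesn't exist after rebuilding, the compile step most likely never produced the CUDA binary in the first place.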
from bitsandbytes.