Comments (4)
I solved this problem: bitsandbytes 0.43.0 from conda-forge did not work for me, so I downgraded to 0.42.0 with `conda install bitsandbytes=0.42.0`, and it works now!
The full steps to recover are below:
# create a new conda env
conda create -n py312 python=3.12
conda install conda-forge::transformers conda-forge::accelerate conda-forge::bitsandbytes=0.42
python -m bitsandbytes
It should work now; the output of `python -m bitsandbytes` is below:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ /usr/local CUDA PATHS +++++++++++++++++++
/usr/local/cuda-11.8/targets/x86_64-linux/lib/stubs/libcuda.so
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudart.so
/usr/local/cuda-12.4/targets/x86_64-linux/lib/stubs/libcuda.so
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudart.so
/usr/local/cuda-12.2/targets/x86_64-linux/lib/stubs/libcuda.so
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudart.so
+++++++++++++++ WORKING DIRECTORY CUDA PATHS +++++++++++++++
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda110.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda110_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda111.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda111_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda114.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda114_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda115.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda115_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda117.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda118.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda118_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda120.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda120_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda121.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda121_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda122.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda122_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda123.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda123_nocublaslt.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/torch/lib/libc10_cuda.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so
/mnt/d/workspace/myenv/lib/python3.8/site-packages/torch/lib/libtorch_cuda_linalg.so
++++++++++++++++++ LD_LIBRARY CUDA PATHS +++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
COMPILED_WITH_CUDA = True
COMPUTE_CAPABILITIES_PER_GPU = ['8.9']
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Running a quick check that:
+ library is importable
+ CUDA function is callable
WARNING: Please be sure to sanitize sensible info from any such env vars!
SUCCESS!
Installation was successful!
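Once the environment is built, a quick stdlib check can confirm which package versions conda actually resolved (run inside the new env; package names are assumed to match their distribution names):

```python
# Print the installed versions of the packages involved, using only the stdlib.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("transformers", "accelerate", "bitsandbytes"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```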
bitsandbytes 0.43.0 from conda-forge is still not working; the error log is below, and I don't know what happened:
/home/donadjohn/miniconda3/envs/py311/lib/python3.11/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/donadjohn/miniconda3/envs/py311/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++ ANACONDA CUDA PATHS ++++++++++++++++++++
['/home/donadjohn/miniconda3/envs/py311/lib/libcudart.so']
++++++++++++++++++ /usr/local CUDA PATHS +++++++++++++++++++
[]
+++++++++++++++ WORKING DIRECTORY CUDA PATHS +++++++++++++++
[]
+++++ LD_LIBRARY_PATH /usr/local/cuda/lib64 CUDA PATHS +++++
['/usr/local/cuda/lib64/stubs/libcuda.so']
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
COMPILED_WITH_CUDA = False
COMPUTE_CAPABILITIES_PER_GPU = []
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and CUDA is callable...
WARNING: Please be sure to sanitize sensitive info from any such env vars!
Torch not compiled with CUDA enabled
Above we output some debug information. Please provide this info when creating an issue via https://github.com/TimDettmers/bitsandbytes/issues/new/choose ...
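As a side note, the `undefined symbol: cadam32bit_grad_fp32` line above is the tell-tale sign that the CPU-only library was loaded: that symbol is missing from `libbitsandbytes_cpu.so`. Since ctypes raises `AttributeError` for a missing symbol, it gives a quick way to probe any shared library; a sketch, demonstrated here against the process's own symbols rather than the bitsandbytes `.so`:

```python
import ctypes

# Handle to the symbols already loaded in the current process (includes libc on Linux)
lib = ctypes.CDLL(None)

print(hasattr(lib, "printf"))                # True on Linux: libc exports printf
print(hasattr(lib, "cadam32bit_grad_fp32"))  # False: the CUDA kernel symbol is absent
```

The same `hasattr` check against the actual `libbitsandbytes_*.so` file would show whether the build you loaded contains the CUDA kernels.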
from bitsandbytes.
Eh, the following code is still not working under conda, but the same code with the same library versions works in a venv. I'm giving up and switching from conda to venv:
>>> from transformers import AutoModelForCausalLM
>>>
>>> model = AutoModelForCausalLM.from_pretrained(
... "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True
... )
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/donadjohn/miniconda3/envs/py312/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/donadjohn/miniconda3/envs/py312/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3049, in from_pretrained
hf_quantizer.validate_environment(
File "/home/donadjohn/miniconda3/envs/py312/lib/python3.11/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 62, in validate_environment
raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
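Incidentally, the deprecation warning above already names the replacement: pass a `BitsAndBytesConfig` object via `quantization_config`. A minimal sketch with the same model (not run here, since it downloads weights and needs a working GPU setup; the compute dtype is an illustrative choice):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Equivalent of load_in_4bit=True, expressed through the non-deprecated config object
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    device_map="auto",
    quantization_config=bnb_config,
)
```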
OK, I downgraded Python to 3.8 and it works again.
I'm impressed.
I had the same issue as you, on an Ubuntu 22.04 machine with a conda env. On my end, I believe the problem was due to dependency conflicts between CUDA, PyTorch, and bitsandbytes.
I decided to reinstall everything in a new env using Python 3.11:
conda create -y -n myenv python=3.11
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install transformers
pip install bitsandbytes
I installed version 0.43.0 of bitsandbytes, following the bitsandbytes documentation on Hugging Face: https://huggingface.co/docs/bitsandbytes/main/en/installation
I then followed this code snippet, changing the version to mine (118):
wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/install_cuda.sh
# Syntax cuda_install CUDA_VERSION INSTALL_PREFIX EXPORT_TO_BASH
# CUDA_VERSION in {110, 111, 112, 113, 114, 115, 116, 117, 118, 120, 121, 122, 123, 124}
# EXPORT_TO_BASH in {0, 1} with 0=False and 1=True
# For example, the following installs CUDA 11.8 to ~/local/cuda-11.8 and exports the path to your .bashrc
bash install_cuda.sh 118 ~/local 1
After running this, my code compiled without any issues.
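The same installation page also describes steering bitsandbytes toward a specific toolkit with environment variables, which can matter when several CUDA versions coexist as in the logs above. A sketch assuming the `~/local` install prefix used in the snippet (version numbers illustrative):

```shell
# Tell bitsandbytes which CUDA binary to load (118 = CUDA 11.8)
export BNB_CUDA_VERSION=118
# Make the locally installed toolkit visible to the dynamic loader
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/local/cuda-11.8/lib64"
echo "BNB_CUDA_VERSION=$BNB_CUDA_VERSION"
```

Putting these exports in `.bashrc` (which `install_cuda.sh` offers to do for you) makes the override persistent.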