Comments (8)
Thank you for your valuable feedback. I have replicated the error and am actively seeking a resolution.
from chat-univi.
Okay, I found a solution to this problem. DeepSpeed does not support Windows well, so we need to compile it manually.
Pull the DeepSpeed repo:
git clone https://github.com/microsoft/DeepSpeed.git
cd DeepSpeed
# roll back the files to DeepSpeed 0.9.5,
# because the command-line log from the previous step shows this is the version being installed
# (a higher version might also work)
git reset --hard 8b7423d2
When compiling DeepSpeed, you may encounter a type conversion error between size_t and _Ty. Just add (unsigned) casts on lines 536, 537, 545, 546, and 1570 of csrc/transformer/inference/csrc/pt_binding.cpp:
// line 536, 537
{hidden_dim * (unsigned) InferenceContext::Instance().GetMaxTokenLength(),
k * (unsigned) InferenceContext::Instance().GetMaxTokenLength(),
// line 545, 546
{hidden_dim * (unsigned) InferenceContext::Instance().GetMaxTokenLength(),
k * (unsigned) InferenceContext::Instance().GetMaxTokenLength(),
// line 1570
at::from_blob(intermediate_ptr, {input.size(0), input.size(1), (unsigned) mlp_1_out_neurons}, options);
Compile and install DeepSpeed
# using the same conda env as Chat-UniVi is okay
conda activate chatunivi
# run build script for Windows
build_win.bat
# a deepspeed-0.9.5-....whl file should then be in the dist folder
# install it with pip
pip install dist/deepspeed-0.9.5-....whl
Now the pip install -e . command of Chat-UniVi should run successfully.
Here is the .whl file built on my machine. It was built with WinSDK 10.0.22000.0 and CUDA 11.7. I don't know whether it can help you guys.
deepspeed-0.9.5+8b7423d2-cp310-cp310-win_amd64.whl.zip
Then another problem arises: flash-attn does not support Windows well either. 🤣🤣 I'm trying to solve this now.
from chat-univi.
Thank you very much!
Are you planning to train the model on Windows? If you only intend to perform inference, there's no need to install deepspeed and flash-attn.
from chat-univi.
Not yet. I'm just trying to set up the environment and evaluate inference performance.
Thanks for that information! I will continue with testing. 😆😆
from chat-univi.
An error occurs when running python main_demo_13B.py:
logs
T:\Projects\Chat-UniVi\main_demo_7B.py:17: SyntaxWarning: "is not" with a literal. Did you mean "!="?
assert model_path is not ""
False
===================================BUG REPORT===================================
C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
warn(msg)
The following directories listed in your path were found to be non-existent: {WindowsPath('C'), WindowsPath('/Users/firok/.conda/envs/chatunivi/lib')}
C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: C:\Users\firok.conda\envs\chatunivi did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 7.5.
CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Traceback (most recent call last):
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\transformers\utils\import_utils.py", line 1099, in get_module
return importlib.import_module("." + module_name, self.name)
File "C:\Users\firok.conda\envs\chatunivi\lib\importlib_init.py", line 126, in import_module
return _bootstrap.gcd_import(name[level:], package, level)
File "", line 1050, in gcd_import
File "", line 1027, in find_and_load
File "", line 1006, in find_and_load_unlocked
File "", line 688, in load_unlocked
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\transformers\models\llama\modeling_llama.py", line 32, in
from ...modeling_utils import PreTrainedModel
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\transformers\modeling_utils.py", line 38, in
from .deepspeed import deepspeed_config, is_deepspeed_zero3_enabled
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\transformers\deepspeed.py", line 37, in
from accelerate.utils.deepspeed import HfDeepSpeedConfig as DeepSpeedConfig
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\accelerate_init.py", line 3, in
from .accelerator import Accelerator
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\accelerate\accelerator.py", line 35, in
from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\accelerate\checkpointing.py", line 24, in
from .utils import (
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\accelerate\utils_init.py", line 131, in
from .bnb import has_4bit_bnb_layers, load_and_quantize_model
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\accelerate\utils\bnb.py", line 42, in
import bitsandbytes as bnb
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes_init.py", line 6, in
from . import cuda_setup, utils, research
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes\research_init.py", line 1, in
from . import nn
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes\research\nn_init.py", line 1, in
from .modules import LinearFP8Mixed, LinearFP8Global
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in
from bitsandbytes.optim import GlobalOptimManager
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes\optim_init.py", line 6, in
from bitsandbytes.cextension import COMPILED_WITH_CUDA
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\bitsandbytes\cextension.py", line 20, in
raise RuntimeError('''
RuntimeError:
CUDA Setup failed despite GPU being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "T:\Projects\Chat-UniVi\main_demo_7B.py", line 4, in
from ChatUniVi.conversation import conv_templates, Conversation
File "T:\Projects\Chat-UniVi\ChatUniVi_init_.py", line 1, in
from .model import ChatUniViLlamaForCausalLM
File "T:\Projects\Chat-UniVi\ChatUniVi\model_init_.py", line 1, in
from .language_model.llama import ChatUniViLlamaForCausalLM, ChatUniViConfig
File "T:\Projects\Chat-UniVi\ChatUniVi\model\language_model\llama.py", line 5, in
from transformers import AutoConfig, AutoModelForCausalLM,
File "", line 1075, in _handle_fromlist
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\transformers\utils\import_utils.py", line 1090, in getattr
value = getattr(module, name)
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\transformers\utils\import_utils.py", line 1089, in getattr
module = self._get_module(self._class_to_module[name])
File "C:\Users\firok.conda\envs\chatunivi\lib\site-packages\transformers\utils\import_utils.py", line 1101, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
CUDA Setup failed despite GPU being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
According to this document, manually setting environment variables (BNB_CUDA_VERSION and LD_LIBRARY_PATH) can solve that.
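For example, a minimal sketch of setting these variables from Python before anything imports bitsandbytes (the CUDA version number and bin path below are assumptions for a default CUDA 11.7 install; adjust them to your machine):

import os

# These must be set before bitsandbytes (or anything that imports it, such as transformers/accelerate) is imported.
os.environ["BNB_CUDA_VERSION"] = "117"  # assumed local CUDA toolkit version
os.environ["LD_LIBRARY_PATH"] = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin"  # assumed CUDA bin path

import bitsandbytes  # now picks up the variables set above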
Then come a lot of module-not-found errors. Running pip install gradio transformers timm accelerate decord SentencePiece protobuf with admin permission helps (I don't know why; installation fails without admin permission). I will continue testing tomorrow. 🚲
from chat-univi.
Adding the offload_folder="offload" parameter to the AutoModelForCausalLM.from_pretrained calls in model/builder.py should fix the OOM error when loading the model.
line 75:
model = AutoModelForCausalLM.from_pretrained(model_path, offload_folder='offload', low_cpu_mem_usage=True, **kwargs)
line 82:
model = AutoModelForCausalLM.from_pretrained(model_base, offload_folder='offload', torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
line 92:
model = AutoModelForCausalLM.from_pretrained(model_path, offload_folder='offload', low_cpu_mem_usage=True, **kwargs)
Even when running with
set CUDA_VISIBLE_DEVICES=0 && set BNB_CUDA_VERSION=117 && set LD_LIBRARY_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin" && uvicorn main_demo_7B:app --host 0.0.0.0 --port 8888
the bitsandbytes lib raises the CUDA Setup failed error again. 😭
from chat-univi.
bitsandbytes is used to speed up model inference and appears to be optional. However, I'm currently occupied with another deadline, and I plan to test the demo on Windows over this weekend.
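Since quantization is optional, here is a minimal sketch of what the from_pretrained calls quoted from model/builder.py above amount to when loading in plain fp16 with no bitsandbytes quantization (the model path is a placeholder; use_fast=False is simply the typical choice for Llama tokenizers):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "path/to/Chat-UniVi-7B"  # placeholder: local checkpoint or hub id
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,   # plain fp16, no 8-bit/4-bit weights
    low_cpu_mem_usage=True,
    device_map="auto",
    offload_folder="offload",    # same offload workaround as above
)

Note that, judging from the traceback above, accelerate imports bitsandbytes at import time whenever it is installed, so a broken bitsandbytes install may still need to be removed or replaced (see the next comment) even if quantization is never used.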
from chat-univi.
Hello, I've encountered the same problem. It seems that bitsandbytes doesn't support Windows very well. You can try uninstalling it and then running pip install bitsandbytes-windows.
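After reinstalling, a quick sanity check that the replacement is the one being imported (a sketch; this assumes bitsandbytes-windows installs under the same bitsandbytes import name, which is worth verifying on your machine):

import bitsandbytes as bnb  # should no longer raise the "CUDA Setup failed" RuntimeError
print("bitsandbytes loaded from:", bnb.__file__)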
from chat-univi.