Comments (7)
Using the 6.7B model:
Tue Jan 10 05:19:36 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.13    Driver Version: 525.60.13    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  On   | 00000000:05:00.0 Off |                    0 |
| N/A   31C    P0    32W / 250W |  13062MiB / 16384MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P100-PCIE...  On   | 00000000:42:00.0 Off |                    0 |
| N/A   32C    P0    26W / 250W |      2MiB / 16384MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     17510      C   python                          13060MiB |
+-----------------------------------------------------------------------------+
It's only bound to GPU 0.
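As a quick sanity check (an addition here, not part of the original thread), you can confirm that PyTorch sees both GPUs even though the model only occupies one:

import torch

# List every CUDA device visible to this process; expect 2 on this machine
print(torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))

If only one device is listed, CUDA_VISIBLE_DEVICES may be restricting the process rather than the model-loading code.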
I don't have a multi-GPU machine to test with, so I need your help:
- Is server.py loading the model using line 36 or line 46? Can you add print statements before each of those lines to check?
- If it is using line 46, can you try changing it to this?
model = AutoModelForCausalLM.from_pretrained(Path(f"models/{model_name}"), device_map='auto')
I am not sure if the transformers library supports splitting models across multiple GPUs out of the box, but this might work.
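For reference, device_map='auto' is powered by the accelerate library, and the resulting placement can be inspected after loading. A minimal sketch, assuming the same models/ directory layout that server.py uses:

from pathlib import Path
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    Path(f"models/{model_name}"), device_map='auto'
)
# hf_device_map records which device (GPU index, "cpu", or "disk") each module landed on
print(model.hf_device_map)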
It 'works' but not the way you want:
No more exception:
Loading KoboldAI_OPT-13B-Erebus...
loaded on line 46
Loaded the model in 65.22 seconds.
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
but it's not allocating memory across the GPUs correctly:
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     21306      C   python                           8068MiB |
|    1   N/A  N/A     21306      C   python                           8504MiB |
+-----------------------------------------------------------------------------+
Responses also take forever to return.
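The numbers are consistent with a 32-bit load that spills onto the CPU. As back-of-the-envelope arithmetic (an estimate added here, not a measurement from the thread):

params = 13e9                     # rough parameter count of OPT-13B
print(params * 4 / 2**30)         # fp32: ~48 GiB, far more than the 2 x 16 GiB of VRAM
print(params * 2 / 2**30)         # fp16: ~24 GiB, fits across both P100s

Whatever does not fit on the GPUs is offloaded to CPU RAM by accelerate, which would explain both the low per-GPU usage and the slow responses.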
Unfortunately, it seems that device_map="auto" is the only solution that transformers provides out of the box at the moment: huggingface/transformers#15799 (comment)
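If finer control is ever needed, from_pretrained also accepts a per-device max_memory budget alongside device_map='auto'. A sketch under that assumption; the GiB caps below are placeholders, not tuned values:

model = AutoModelForCausalLM.from_pretrained(
    Path(f"models/{model_name}"),
    device_map='auto',
    # placeholder per-device caps; accelerate fills GPU 0, then GPU 1, then the CPU
    max_memory={0: "14GiB", 1: "14GiB", "cpu": "30GiB"},
)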
Can you try this?
model = AutoModelForCausalLM.from_pretrained(Path(f"models/{model_name}"), torch_dtype=torch.float16, device_map='auto')
Maybe it is trying to load the model in 32-bit mode and sending part of the layers to the CPU.
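To confirm that hypothesis (a suggestion added here, not from the thread), print the dtype the model actually loaded in and look for modules that were offloaded:

print(model.dtype)  # torch.float32 would confirm the 32-bit theory
# any module mapped to "cpu" is offloaded, which is what makes generation crawl
print({name: dev for name, dev in model.hf_device_map.items() if dev == "cpu"})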
Much better:
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     33094      C   python                          13266MiB |
|    1   N/A  N/A     33094      C   python                          12730MiB |
+-----------------------------------------------------------------------------+
It's able to answer questions, albeit not quickly on my hardware:
Input:
Question: Who is Asuka?
Output:
I think it's a character from Evangelion. I'm not really into anime so that was just what my friend told me once when we were talking about this stuff and she seemed to know her name, but then again he might have been pulling our legs... But if you want the real answer here goes nothing! She's an angel of some sort who can take any form she wants at will as long as there are no humans around because angels don't like them or something along those lines. So basically in other words she could be anyone she wanted with whatever powers they had on Earth without having to worry about being recognized by someone else. That makes sense right?! Anyway back to why I brought up all these things.... Because now for reasons unknown to us both (and probably most people) she has taken over his body which explains how everything happened today."<|endoftext|>
It should now be possible to reproduce this ok-ish behavior across multiple GPUs with:
python server.py --auto-devices
I will close the issue. In case of errors, feel free to reopen it.
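For anyone driving the split model from a script instead of the web UI, a minimal generation sketch (the prompt is the one tested above; the call pattern is standard transformers usage, not code from this repository):

import torch
from pathlib import Path
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = Path("models/KoboldAI_OPT-13B-Erebus")
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto'
)

# Inputs go to the first device; accelerate moves activations between GPUs itself
inputs = tokenizer("Question: Who is Asuka?", return_tensors="pt").to(0)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0]))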