Comments (3)
Hi @Maydaytyh
```
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
[...]
ValueError: The quantization method fp8 is not supported for the current GPU. Minimum capability: 90. Current capability: 86.
```
FP8 is only supported on sm90 and newer, i.e. Hopper cards. (Per fp8.py, support for sm89 (Ada, e.g. the RTX 4090) may come once vLLM upgrades to PyTorch 2.3.0.)
AWQ and GPTQ quantization are much less hardware-specific; you might try one of those instead. For example, a minimal sketch of an AWQ fallback is shown below.
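A minimal sketch of loading an AWQ-quantized checkpoint with vLLM's `quantization="awq"` option; the model ID here is only a placeholder, substitute any AWQ-quantized checkpoint you have:

```python
from vllm import LLM, SamplingParams

# AWQ kernels run on Ampere cards (sm80/sm86) such as the RTX A6000,
# unlike fp8, which currently requires sm90+ (Hopper).
# "TheBloke/Llama-2-7B-AWQ" is a placeholder AWQ checkpoint.
llm = LLM(model="TheBloke/Llama-2-7B-AWQ", quantization="awq")

outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```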
I have the same error.
```
ERROR 04-24 21:28:44 worker_base.py:157] KeyError: 'model.layers.55.mlp.down_proj.in_scale'
KeyError: 'model.layers.55.mlp.down_proj.in_scale'
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157] Error executing method load_model. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157] Traceback (most recent call last):
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]   File "/home/d/anaconda3/envs/3.8/lib/python3.8/site-packages/vllm/worker/worker_base.py", line 149, in execute_method
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]     return executor(*args, **kwargs)
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]   File "/home/d/anaconda3/envs/3.8/lib/python3.8/site-packages/vllm/worker/worker.py", line 117, in load_model
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]     self.model_runner.load_model()
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]   File "/home/d/anaconda3/envs/3.8/lib/python3.8/site-packages/vllm/worker/model_runner.py", line 162, in load_model
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]     self.model = get_model(
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]   File "/home/d/anaconda3/envs/3.8/lib/python3.8/site-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]     return loader.load_model(model_config=model_config,
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]   File "/home/d/anaconda3/envs/3.8/lib/python3.8/site-packages/vllm/model_executor/model_loader/loader.py", line 224, in load_model
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]     model.load_weights(
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]   File "/home/d/anaconda3/envs/3.8/lib/python3.8/site-packages/vllm/model_executor/models/llama.py", line 411, in load_weights
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157]     param = params_dict[name]
(RayWorkerWrapper pid=3766121) ERROR 04-24 21:28:45 worker_base.py:157] KeyError: 'model.layers.55.mlp.down_proj.in_scale'
```
Yes, this is intentional: at the moment FP8 is only supported where we have native hardware support.
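To illustrate the gate, here is a sketch of the capability check (the same condition the error above reports, not vLLM's exact code from fp8.py; the model ID is a placeholder):

```python
import torch
from vllm import LLM

# fp8 needs native hardware support: compute capability >= 9.0 (Hopper).
# An RTX A6000 reports (8, 6), which is why the load above fails.
major, minor = torch.cuda.get_device_capability()
quantization = "fp8" if (major, minor) >= (9, 0) else None

# Placeholder model ID; with quantization=None the weights load unquantized.
llm = LLM(model="meta-llama/Meta-Llama-3-8B", quantization=quantization)
```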
Related Issues (20)
- [Bug]: Issues with Applying LoRA in vllm on a T4 GPU
- [Bug]: Issues with Applying LoRA in vllm on a T4 GPU
- [Feature]: Speculative edits
- May I ask when the qwen moe quantization version is supported, preferably using auto gptq or awq.
- [Feature]: inconsistent vocab_sizes support for draft and target workers while using Speculative Decoding
- [Bug]: Different token return behaviors from vllm 0.3.0 → 0.4.3
- [Feature]: Option to override HuggingFace's configurations
- [Bug]: Detokenize delay when update vllm from 0.3.0 to 0.4.2
- [Bug]: VLLM_ATTENTION_BACKEND set to ROCM_FLASH only in GHA environment, overriding automatic backend selection; this breaks other kernel unit tests.
- [Feature]: Support for Mirostat, Dynamic Temperature, and Quadratic Sampling
- [Usage]: how to terminal a vllm model and free or release gpu memory
- [Installation]: Failed to build punica
- [Bug]: prompt_logprobs=0 raises AssertionError
- [Bug]: Mistral 7B crashes on NVidia Tesla P100 with a CUDA Error
- [Bug]: Mixtral-8x22 request cancelled by cancel scope when client sends multiple concurrent requests
- [Doc]: Update the vllm distributed Inference and Serving with the new MultiprocessingGPUExecutor
- [Usage]: RuntimeError: CUDA error: uncorrectable ECC error encountered
- v0.5.0 Release Tracker
- [Usage]: How to start inference serving through `LLM` object
- [Feature]: Custom attention masks