Comments (3)
Hi @Aaron-mindverse it's not clear why your provider is hanging with multi-GPU. If you copy and paste the code (or provide a link to a branch with the modifications) I can test it on my side. But I would first recommend taking a look at our BloomPipeline and load_hf_llm (DeepSpeed-MII/mii/models/providers/llm.py, line 25 at commit 747072b). Use this as a template for creating your OPTProvider and it should work with multi-GPU. Thanks!
from deepspeed-mii.
Thank you so much, this worked for me, although I had to modify a lot of DeepSpeed code specifically tailored to the Bloom model, such as 'get_transformer_name'.
@Aaron-mindverse I don't think you should need to modify DS to get the OPT models running. In general, the OPT models will work with the HuggingFace provider. Could you share the modified code and I can help debug why you're running into Bloom-specific code paths on the DeepSpeed side?
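To illustrate the suggestion above, here is a minimal sketch of what an OPT provider modeled on MII's generic HuggingFace loading path could look like. The names `load_opt_llm` and `get_local_rank` are hypothetical, and the `deepspeed.init_inference` arguments reflect the older DeepSpeed-Inference API from around the time of this thread; treat it as a starting point, not MII's actual implementation.

```python
import os


def get_local_rank() -> int:
    """Each process spawned by the deepspeed launcher gets a LOCAL_RANK
    environment variable; that is the GPU index this rank must pin."""
    return int(os.environ.get("LOCAL_RANK", "0"))


def load_opt_llm(model_name: str, tensor_parallel: int = 1):
    """Hypothetical OPT provider: load the HF checkpoint on each rank,
    then let DeepSpeed-Inference shard it across tensor_parallel GPUs.

    No Bloom-specific code path (e.g. get_transformer_name) should be
    needed: kernel injection resolves the OPT module names itself.
    """
    # Heavy imports are kept inside the function so the module can be
    # imported on machines without GPUs or DeepSpeed installed.
    import torch
    import deepspeed
    from transformers import AutoModelForCausalLM, AutoTokenizer

    local_rank = get_local_rank()
    torch.cuda.set_device(local_rank)  # pin this rank to its own GPU

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16
    )
    model = deepspeed.init_inference(
        model,
        mp_size=tensor_parallel,          # tensor-parallel world size
        dtype=torch.float16,
        replace_with_kernel_inject=True,  # use DS inference kernels
    )
    return tokenizer, model
```

Launched with something like `deepspeed --num_gpus 2 script.py`, each rank would call `load_opt_llm("facebook/opt-1.3b", tensor_parallel=2)` and DeepSpeed would split the layer weights across the two GPUs.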
Related Issues (20)
- How can I use DeepSpeed to split the model across GPUs?
- Is the OpenAI-compatible server still working?
- How do I launch the API on a graphics card other than cuda:0?
- How is the prompt segmentation specifically implemented for Dynamic SplitFuse? Is there any code implementation or snippet?
- [FEATURE] Access to logits and final hidden layer
- RuntimeError: The server socket has failed to listen on any local network address
- Only running one replica even though setting many replicas
- [Problem]errno: 98 - Address already in use
- Performance with vllm
- error when using Qwen1.5-32B
- ValueError: Unsupported model type phi3
- BUG in run_batch_processing
- Cannot run Yi-34B-Chat => ValueError: Unsupported q_ratio: 7
- [REQUEST] Mixtral-8x22B support
- [REQUEST] LLAMA-3 support
- Does deepspeed-mii support prefix_allowed_tokens_fn?
- Can DeepSpeed-MII load int4 or int8 quantized models?
- Tf32 support
- How can I use the same prompt to produce the same text output as vllm
- Stronger support for LLaVA-NeXT