Comments (6)
Hi @Tianwei-She, thanks for using MII. It looks like you're seeing this problem because we load the model in fp32 onto each GPU before converting it to fp16 here.
In general, MII is not the most efficient with GPU memory when running multi-GPU, because:
- we load the model onto each GPU before distributing it with DeepSpeed-Inference
- we always load the model with the default dtype (which is typically fp32)
Here's a PR that addresses some of these inefficiencies by loading with the user-specified dtype and allowing the model to be staged in system memory before it is distributed across GPUs. Please give #105 a try and let me know if that fixes your problem:
pip install git+https://github.com/microsoft/deepspeed-mii@mrwyattii/address-poor-vram-usage
The script you shared should work with these changes, but if it doesn't, try adding "load_with_sys_mem": True to your mii_configs.
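To make the suggestion concrete, here is a minimal sketch of a mii_configs dict using that option. The model and deployment names are placeholders, and the deploy call is shown commented out since the exact call depends on your MII version:

```python
# Sketch of an MII config that loads the model in fp16 via system memory
# before it is partitioned across GPUs (per the fix in PR #105).
mii_configs = {
    "tensor_parallel": 2,       # number of GPUs to shard the model across
    "dtype": "fp16",            # load directly in half precision, not fp32
    "load_with_sys_mem": True,  # stage the full model in CPU RAM first
}

# Hypothetical usage (deployment/model names are placeholders):
# mii.deploy(task="text-generation",
#            model="facebook/opt-1.3b",
#            deployment_name="opt_deploy",
#            mii_config=mii_configs)
```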
Note: Unfortunately, we will still need to load the entire model tensor_parallel times (either one copy on each GPU, or all copies in system memory). We are working on addressing this, but I don't have a fix right now.
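The memory cost of that limitation is easy to estimate: staging memory grows linearly with the number of copies. A rough illustration (the helper function and numbers are for illustration only, not an MII API):

```python
# Why loading the full model tensor_parallel times hurts: the staging
# memory is one full copy per rank, before any partitioning happens.
def staging_bytes(n_params: int, bytes_per_param: int, tensor_parallel: int) -> int:
    """Bytes needed to stage the model before DeepSpeed-Inference partitions it."""
    return n_params * bytes_per_param * tensor_parallel

# e.g. a 30B-parameter model in fp16 (2 bytes/param) across 2 GPUs:
print(staging_bytes(30_000_000_000, 2, 2) / 1e9)  # 120.0 -> ~120 GB staged
```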
from deepspeed-mii.
Closing due to inactivity, and #105 has been merged. Please reopen if you are seeing the same error with the latest DeepSpeed-MII.
@mrwyattii Same error in v0.0.4
Config:
mii_config = {"tensor_parallel": 1, "dtype": "fp16", "load_with_sys_mem": True}
name = "facebook/opt-30b"
ds_config = {
    "fp16": {"enabled": True},
    "bf16": {"enabled": False},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu", "pin_memory": False},
    },
    "train_micro_batch_size_per_gpu": 1,
}
@wangshankun what kind of GPU are you trying to run on? The OPT-30b model is ~60GB in size. From the screenshot you shared, it looks like you only have 22GB of GPU memory available and will not be able to run a model this large:
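The ~60GB figure follows from a back-of-envelope calculation (parameter count and byte width only; it ignores activation and runtime overhead, so the real requirement is higher):

```python
# Size check for OPT-30b: ~30e9 parameters at 2 bytes each in fp16.
n_params = 30_000_000_000
fp16_bytes = n_params * 2
print(fp16_bytes / 1e9)  # 60.0 -> ~60 GB of weights alone, vs ~22 GB of free VRAM
```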
@mrwyattii
Did I misunderstand the meaning of CPU offload in ZeRO-3?
It is precisely because GPU memory is insufficient that I want to place the model in host memory, which is why I configured CPU offload and load_with_sys_mem.
Same question... Does ZeRO offload actually work in MII? I have had a lot of difficulty getting DeepSpeed-MII to do any sort of CPU or NVMe offload.