Comments (4)
There are several trade-offs to consider, so for a full answer let me first recommend this excellent survey on parallelism in deep learning: https://arxiv.org/abs/1802.09941
From a library perspective, it's difficult to provide general model parallelism because it is specific to the user's model. Model parallelism certainly has its uses, such as being more memory-scalable than data-parallel approaches like batch splitting.
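To make "specific to the user's model" concrete, here is a minimal sketch of one Megatron-style building block; the function name and sharding scheme are illustrative assumptions, not DeepSpeed API. Each rank holds a shard of a layer's weight, and the ranks coordinate with a collective:

```python
import torch
import torch.distributed as dist

def row_parallel_linear(x_shard: torch.Tensor,
                        w_shard: torch.Tensor,
                        group=None) -> torch.Tensor:
    """Hypothetical row-parallel linear layer (sketch only).

    Each rank holds a slice of the weight's input dimension (w_shard)
    and the matching slice of the activation (x_shard). Summing the
    partial products across ranks reconstructs the full x @ W.
    """
    partial = x_shard @ w_shard            # local partial product
    dist.all_reduce(partial, group=group)  # sum partials across ranks
    return partial
```

How to shard each layer, and which collectives to issue where, depends on the model's architecture, which is why a library cannot do this generically.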
ZeRO is a set of complementary optimizations that improve scalability without users having to implement model parallelism. The key idea is that users still provide a model without designing it for parallelism, and DeepSpeed can combine data parallelism with ZeRO to scale to large models and large degrees of parallelism. DeepSpeed has scaled to models with 6 billion parameters using only data parallelism and ZeRO on V100 GPUs. Adding model parallelism via Megatron-LM took DeepSpeed to 100B parameters.
I'd like to note that we are of course not anti-model parallelism. DeepSpeed is meant to work with model parallelism if the user has a model-parallel program. The Megatron tutorial touches on this in more depth.
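As an illustration of the "no redesign" point: enabling ZeRO is a configuration change rather than model surgery. A minimal sketch of a DeepSpeed config, written as a Python dict (all values are placeholders, not tuned):

```python
# Minimal DeepSpeed config sketch; values are placeholder assumptions.
ds_config = {
    "train_batch_size": 64,             # global batch size (placeholder)
    "fp16": {"enabled": True},          # mixed-precision training
    "zero_optimization": {"stage": 2},  # partition optimizer states + gradients
}
```

Stage 2 partitions optimizer states and gradients across data-parallel ranks; stage 3 additionally partitions the parameters themselves.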
Hi there! DeepSpeed does not implement model parallelism, but it does support models that use it. It's up to the user to implement model parallelism (e.g., a user might use some dist.XXX() communication routines to coordinate forward/backward passes). DeepSpeed just needs an mpu object at initialization to query things like process ranks and groups (and world_size) during training.
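For illustration, a hedged sketch of what a minimal mpu-like object could look like. The method names follow Megatron-LM's mpu interface, which the DeepSpeed tutorials use; the trivial layout (model-parallel size 1, i.e., pure data parallelism) is an assumption for the sketch:

```python
import torch.distributed as dist

class SimpleMPU:
    """Hypothetical minimal mpu (sketch only): model-parallel size 1,
    so every rank does pure data parallelism. Assumes
    dist.init_process_group() has already been called."""

    def __init__(self):
        world_size = dist.get_world_size()
        # All ranks form a single data-parallel group.
        self._dp_group = dist.new_group(ranks=list(range(world_size)))
        # new_group is collective: every rank must create every group,
        # keeping only the one it belongs to.
        for r in range(world_size):
            group = dist.new_group(ranks=[r])
            if r == dist.get_rank():
                self._mp_group = group

    def get_model_parallel_rank(self):
        return 0

    def get_model_parallel_world_size(self):
        return 1

    def get_model_parallel_group(self):
        return self._mp_group

    def get_data_parallel_rank(self):
        return dist.get_rank()

    def get_data_parallel_world_size(self):
        return dist.get_world_size()

    def get_data_parallel_group(self):
        return self._dp_group
```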
The difficulty of model parallelism was one major motivation for ZeRO. If you enable ZeRO, you can avoid the need for model parallelism in many cases. As an example, the Megatron-LM tutorial combines Megatron's model parallelism with ZeRO.
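Putting the pieces together, a hedged sketch of the wiring; deepspeed.initialize accepts mpu and config arguments, and SimpleMPU and ds_config refer to the sketches above:

```python
import torch
import deepspeed

# A toy model; any torch.nn.Module works, including model-parallel ones.
model = torch.nn.Linear(1024, 1024)

# SimpleMPU and ds_config are the illustrative sketches from above.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    mpu=SimpleMPU(),   # or e.g. megatron.mpu for real model parallelism
    config=ds_config,  # ZeRO enabled via "zero_optimization"
)
```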
@ShadenSmith Got it. Is it because model parallelism is not efficient or scalable enough that you studied memory optimization, i.e., ZeRO?
I'm new to this field, and there is very little material about model parallelism. Can I ask why distributed model parallelism is hard, and in what way: inter-machine communication, network splitting, or at the algorithm level?
And thank you.
> The Megatron tutorial touches on this in more depth.

The link to the Megatron tutorial is 404; here is a stable link:
https://github.com/microsoft/DeepSpeed/blob/46d2e2872b64ebccb8bf4eb5c8a3a55f9adaaa6c/docs/_tutorials/megatron.md