Comments (7)
This feature is definitely on our radar, but I don't have a concrete timeline yet. We'll be sure to update this thread when we have more info.
from deepspeed.
@Michiel29 As Jeff mentioned, this is something we definitely want, but it is not currently at the top of our priority list. We'd be more than happy to accept contributions on this from the community. If you would like to contribute, we can share our discussions on what would be needed to support LAMB with ZeRO. Please let us know.
@jeffra I'd like to use lamb with zero too. Can you share updates on using lamb + zero?
Why Lamb is not in a list of supported optimizers?
Please share your ideas of what would be needed to support Lamb with ZeRO.
I realized that we would need two changes to use LAMB with ZeRO:
- Partition the model parameters so that each tensor is kept whole on a single node (here)
- Preserve that tensor-boundary split of the parameter vector at optimizer initialization (here)
Am I right?
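The first point above can be sketched as a greedy balancing problem: assign each parameter tensor, whole, to one rank while keeping the element counts roughly even across ranks. This is a minimal pure-Python illustration of that idea, not DeepSpeed's actual partitioning code; the function name and the largest-first heuristic are my own choices.

```python
def partition_whole_tensors(numels, world_size):
    """Assign each tensor (given by its element count) to exactly one rank,
    largest tensors first, always picking the currently lightest rank.
    Returns a list mapping rank -> list of tensor indices owned by that rank.
    Because tensors are never split, each rank holds whole tensors and a
    per-tensor optimizer step (like LAMB's trust ratio) stays local."""
    loads = [0] * world_size                      # elements assigned so far
    assignment = [[] for _ in range(world_size)]  # rank -> tensor indices
    for idx, n in sorted(enumerate(numels), key=lambda p: -p[1]):
        rank = loads.index(min(loads))            # lightest rank so far
        assignment[rank].append(idx)
        loads[rank] += n
    return assignment

# Example: four tensors, two ranks; each tensor lands whole on one rank.
parts = partition_whole_tensors([100, 90, 10, 10], world_size=2)
```

Largest-first greedy assignment keeps the per-rank loads close without the complexity of an optimal bin-packing; the trade-off versus ZeRO's flat equal-size split is that memory balance is only approximate.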
Hi, I am not very familiar with LAMB. Can you explain your thoughts on combining LAMB with ZeRO in detail? Thanks a lot.
Hi, I'd like to use LAMB with ZeRO stage 1. Could you share your ideas on what would be needed to support LAMB with ZeRO?
Do we need to modify the implementation of fused LAMB, or the implementation of ZeRO?
If I know the detailed idea, I can contribute the code. Looking forward to your reply.
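For context on why this question is subtle: LAMB scales each tensor's Adam-style update by a per-tensor trust ratio, the norm of the whole weight tensor divided by the norm of the whole update. A rank that owns only a flat slice of a tensor (as in ZeRO's default partitioning) cannot form either norm locally. Below is a minimal pure-Python sketch of that trust-ratio step, written from the LAMB paper's description; it is an illustration, not the fused-LAMB kernel.

```python
import math

def lamb_trust_ratio(weight, adam_update):
    """Per-tensor LAMB trust ratio: ||w|| / ||u|| over the WHOLE tensor.
    `weight` and `adam_update` are flat lists of floats for one tensor.
    Falls back to 1.0 when either norm is zero, as commonly done."""
    w_norm = math.sqrt(sum(w * w for w in weight))
    u_norm = math.sqrt(sum(u * u for u in adam_update))
    if w_norm > 0 and u_norm > 0:
        return w_norm / u_norm
    return 1.0

# The actual step would then be: w -= lr * trust_ratio * adam_update.
ratio = lamb_trust_ratio([2.0, 2.0, 2.0, 2.0], [1.0, 1.0, 1.0, 1.0])
```

So the change is less about the fused LAMB kernel itself and more about making ZeRO's partitioning respect tensor boundaries (or adding a norm all-reduce per tensor), so each rank can compute these whole-tensor norms.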