Comments (4)
You can train with a larger batch size in two ways:
- Use gradient accumulation. If you want to train with a batch size of 32 but can only fit a batch size of 4, you can use a `micro_batch_per_gpu` of 4 and a `gradient_accumulation_steps` of 8. Was the original code using gradient accumulation? Are the batch sizes consistent between the DeepSpeed config file and the data loader you are using?
- Use activation checkpointing/re-materialization/re-computation. I assume this is what you mean by gradient checkpointing. With this you should be able to run a significantly larger batch size for the model you are describing.
What is the sequence length you are using, and what are your hidden dimensions?
Please take a look at lines 390-398 in the following file for an example of activation checkpointing.
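As a sketch, the gradient-accumulation example above maps onto the `train_batch_size`, `train_micro_batch_size_per_gpu`, and `gradient_accumulation_steps` keys of a DeepSpeed config; the single-GPU world size here is an assumption for the example:

```python
# Sketch of the gradient-accumulation settings as a DeepSpeed config dict,
# assuming a world size of 1 GPU. DeepSpeed requires that
# train_batch_size == train_micro_batch_size_per_gpu
#                     * gradient_accumulation_steps * num_gpus.
num_gpus = 1  # assumed world size for this sketch

ds_config = {
    "train_batch_size": 32,               # effective (global) batch size
    "train_micro_batch_size_per_gpu": 4,  # what actually fits in GPU memory
    "gradient_accumulation_steps": 8,     # 4 * 8 * 1 = 32
}

effective = (ds_config["train_micro_batch_size_per_gpu"]
             * ds_config["gradient_accumulation_steps"]
             * num_gpus)
# If this does not match train_batch_size, DeepSpeed rejects the config.
assert effective == ds_config["train_batch_size"]
```

This is the invariant to check when comparing the config file against the data loader: the loader should produce the micro-batch size, not the effective batch size.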
from deepspeed.
Thanks @samyam for the feedback.
In the original code, I was using both gradient accumulation and activation checkpointing/re-materialization/re-computation.
The total batch size was 32 (per-GPU batch) * 8 (gradient accumulation steps) * 6 GPUs = 1536.
Input length: 1024
Embedding dimension: 768
Hidden dimension: 3072
Vocab size: 22
Gradient accumulation is not an issue between my code and the DeepSpeed code.
The batch size is consistent between my custom data loader and the DeepSpeed config file.
The problem is in the activation checkpointing/re-materialization/re-computation. I assumed that if I disabled it and activated DeepSpeed, I would get the same results. Apparently, that is not the case.
I will re-integrate gradient checkpointing and see whether there is a benefit to using DeepSpeed in this case.
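As a quick sanity check of the numbers above (assuming 32 is the per-GPU micro-batch produced by the data loader), the effective batch size works out to:

```python
# Hypothetical consistency check; 32, 8, and 6 are the values quoted above.
micro_batch_per_gpu = 32          # batch size seen by each GPU's data loader
gradient_accumulation_steps = 8   # optimizer steps every 8 micro-batches
num_gpus = 6

# Samples contributing to one optimizer step across all GPUs.
effective_batch_size = micro_batch_per_gpu * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 1536
```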
Yes, DeepSpeed does not do activation checkpointing automatically. You have to add it in the model. Please keep us posted on whether this solves your issue.
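A minimal sketch of what "adding it in the model" can look like, using `torch.utils.checkpoint` on an illustrative feed-forward block (the module names are hypothetical, not from the original code; DeepSpeed's `deepspeed.checkpointing.checkpoint` is called in a similar way):

```python
# Minimal activation-checkpointing sketch: each block's activations are
# recomputed during the backward pass instead of being stored, trading
# compute for memory. Block/Model are illustrative stand-ins.
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint


class Block(nn.Module):
    def __init__(self, d_model=768, d_ff=3072):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        return x + self.ff(x)  # residual feed-forward block


class Model(nn.Module):
    def __init__(self, n_layers=4):
        super().__init__()
        self.blocks = nn.ModuleList(Block() for _ in range(n_layers))

    def forward(self, x):
        for block in self.blocks:
            # Checkpoint each block: only its inputs are kept for backward,
            # and the block is re-run to reproduce intermediate activations.
            x = checkpoint(block, x, use_reentrant=False)
        return x


x = torch.randn(4, 1024, 768, requires_grad=True)  # (batch, seq_len, d_model)
out = Model()(x)
out.sum().backward()  # gradients flow through the recomputed activations
```

With a 1024-token sequence and a 3072-wide hidden layer, the feed-forward intermediates dominate activation memory, which is why checkpointing the blocks frees room for a much larger micro-batch.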
Closing the issue. Please let us know if there are further questions or issues!