Comments (5)
Hi, what is the status of this?
I want to compare ZeRO in FP32 to my system.
There are several use cases where FP32 can use less memory than mixed precision, because mixed precision still keeps a master copy of the parameters in FP32. For example, a model that occupies 44 GB in FP32 will occupy 66 GB under mixed precision just for the model weights (a 22 GB FP16 working copy plus the 44 GB FP32 master copy).
If the activations are small enough, then mixed precision may not be a good choice, at least memory-wise.
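As a rough sanity check on that arithmetic, here is a small sketch (my own illustration, not DeepSpeed code) that counts only the per-parameter weight copies: 4 B/param for plain FP32 versus 2 B (FP16 working copy) + 4 B (FP32 master copy) = 6 B/param for mixed precision. Optimizer states, gradients, and activations are deliberately excluded.

```python
# Bytes needed just for the model weights, ignoring optimizer states,
# gradients, and activations. Illustrative sketch only.
def model_weight_bytes(num_params: int, mixed_precision: bool) -> int:
    if mixed_precision:
        # fp16 working copy (2 B/param) + fp32 master copy (4 B/param)
        return num_params * (2 + 4)
    return num_params * 4  # single fp32 copy


GB = 10 ** 9
n = 11 * 10 ** 9  # ~11B parameters -> 44 GB in plain fp32

print(model_weight_bytes(n, mixed_precision=False) // GB)  # 44
print(model_weight_bytes(n, mixed_precision=True) // GB)   # 66
```

This reproduces the 44 GB vs. 66 GB figures from the comment above.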
from deepspeed.
@LvanderGoten, ZeRO should work with FP32.
Please disable the fp16 flag via
"fp16": {
  "enabled": false
}
and also turn on the fp32_allreduce flag.
Since we implemented ZeRO with fp16 in mind, some of the variable names may not make sense for fp32, and there will be some redundant data copies and memory allocations, but you should still be able to see up to a 2x memory reduction with ZeRO in fp32.
Please give it a try and let us know if you run into any issues.
I get the following:
Traceback (most recent call last):
  File "identification/generative/CPGAN/train_deepspeed.py", line 186, in <module>
    main()
  File "identification/generative/CPGAN/train_deepspeed.py", line 179, in main
    save_every_n_steps=args.save_every_n_steps)
  File "/home/deepspeed/Code/identification/generative/CPGAN/orchestrator_msggan_deepspeed.py", line 250, in train
    training_data=data)
  File "/usr/local/lib/python3.6/dist-packages/deepspeed/__init__.py", line 95, in initialize
    collate_fn=collate_fn)
  File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_light.py", line 126, in __init__
    self._configure_with_arguments(args, mpu)
  File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_light.py", line 324, in _configure_with_arguments
    self._config = DeepSpeedConfig(args.deepspeed_config, mpu)
  File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_config.py", line 243, in __init__
    self._do_sanity_check()
  File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_config.py", line 359, in _do_sanity_check
    self._do_error_check()
  File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_config.py", line 379, in _do_error_check
    assert self.fp16_enabled, "DeepSpeedConfig: ZeRO is only supported if fp16 is enabled"
AssertionError: DeepSpeedConfig: ZeRO is only supported if fp16 is enabled
Using this config:
{
  "train_batch_size": 8,
  "gradient_accumulation_steps": 1,
  "steps_per_print": 1,
  "zero_optimization": true,
  "fp32_allreduce": true,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.0001
    }
  },
  "fp16": {
    "enabled": false
  }
}
We haven't tested ZeRO with fp32, so there may be a few issues along the way here :) If you want to try it out, I would comment out these lines to disable this check: https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/pt/deepspeed_config.py#L378-L379
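For reference, the check in question is the assertion that appears in the traceback above; disabling it amounts to commenting out the assert inside `_do_error_check` in the installed `deepspeed/pt/deepspeed_config.py`. Roughly (a temporary local patch, not an officially supported path):

```diff
 def _do_error_check(self):
     ...
-    assert self.fp16_enabled, "DeepSpeedConfig: ZeRO is only supported if fp16 is enabled"
+    # assert self.fp16_enabled, "DeepSpeedConfig: ZeRO is only supported if fp16 is enabled"
```

Note this edit will be overwritten whenever the deepspeed package is reinstalled or upgraded.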
Feel free to re-open if you run into other issues or let us know if this solved your problem.