Comments (4)
Hi all, sorry for the late response.
This initialization issue has been fixed in my case, as I commented: I set dist_init_required=True
when initializing the first model engine and dist_init_required=False
for the rest. Now it works like a charm.
When I set dist_init_required=False
in all deepspeed.initialize() calls, RuntimeError: trying to initialize the default process group
is raised, which I conjecture is because there are no other deepspeed.initialize() calls in my code.
I modified my code as below and the issue no longer appears, but I wonder whether this is fine.
training_data = load_dataset()
encoder_params = filter(lambda p: p.requires_grad, encoder.parameters())
decoder_params = filter(lambda p: p.requires_grad, decoder.parameters())
self.encoder, self.encoder_optim, train_loader, _ = deepspeed.initialize(
    opt, model=encoder, model_parameters=encoder_params,
    training_data=training_data, dist_init_required=True)
self.decoder, self.decoder_optim, _, _ = deepspeed.initialize(
    opt, model=decoder, model_parameters=decoder_params,
    dist_init_required=False)
Also, following this tutorial, I changed my training code as below, which also throws an error: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
loss = calculate_loss(output, target)
self.encoder.backward(loss)
self.decoder.backward(loss)
self.encoder.step()
self.decoder.step()
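The error above is ordinary autograd behavior rather than anything DeepSpeed-specific: both engines call backward() on the same loss, so the second call walks a graph whose intermediate buffers were freed by the first. A minimal plain-PyTorch sketch (no DeepSpeed; the encoder/decoder shapes here are made up for illustration) reproduces it:

```python
import torch

# Two modules sharing one computation graph, as with an encoder/decoder pair.
x = torch.randn(4, 3)
enc = torch.nn.Linear(3, 3)
dec = torch.nn.Linear(3, 1)
loss = dec(enc(x)).sum()

loss.backward(retain_graph=True)  # keep intermediate buffers for another pass
loss.backward()                   # works only because retain_graph=True above

try:
    loss.backward()  # buffers are now freed, so this pass fails
except RuntimeError:
    print("backward through freed graph fails")
```

This is why calling both self.encoder.backward(loss) and self.decoder.backward(loss) on the same loss raises the error: the first pass frees the graph that the second pass still needs.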
I couldn't find a way to set an option like retain_graph=True
, so I changed my code as below and no longer see any errors, but I still wonder if this is the right way to use DeepSpeed.
self.encoder.zero_grad()
self.decoder.zero_grad()
loss = calculate_loss(output, target)
loss.backward()
self.encoder.step()
self.decoder.step()
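In plain-PyTorch terms, a single backward() on the shared loss does populate gradients in both modules, which is why the rewritten loop can still train both parts. A small sketch with made-up shapes (not code from the thread):

```python
import torch

enc = torch.nn.Linear(3, 3)
dec = torch.nn.Linear(3, 1)
x = torch.randn(4, 3)

loss = dec(enc(x)).sum()
loss.backward()  # one call fills .grad for both modules

assert all(p.grad is not None for p in enc.parameters())
assert all(p.grad is not None for p in dec.parameters())
print("gradients present in encoder and decoder")
```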
So I would greatly appreciate it if you could provide a tutorial for the case where multiple models are trained. Thanks!
from deepspeed.
RuntimeError: trying to initialize the default process group twice!
This error message suggests to me that NCCL initialization had occurred earlier in your code, before the first call to deepspeed.initialize(). Can you confirm that? If so, can you try setting dist_init_required=False in all the deepspeed.initialize() calls?
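One way to verify this is to check whether the default process group already exists before the first deepspeed.initialize() call. A diagnostic sketch (not from the thread):

```python
import torch.distributed as dist

# If this prints True before any deepspeed.initialize() call, something
# else (e.g. an earlier torch.distributed.init_process_group) has already
# set up the process group, and dist_init_required=False would be the
# appropriate setting for every deepspeed.initialize() call.
print(dist.is_initialized())
```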
Hi @1Konny, have you tried anything further with this issue?
Related Issues (20)
- [BUG] Running llama2-7b step3 with tensor parallel and HE fails due to incompatible shapes
- RuntimeError: Error building extension 'cpu_adam', because /usr/bin/ld: can not find -lcurand,help! HOT 1
- Fail to use zero_init to construct llama2 with deepspeed zero3 and bnb!
- does DeepSpeed support AMSP (a new DP shard strategy)
- [BUG] 'Invalidate trace cache' with Seq2SeqTrainer+predict_with_generate+Zero3
- AssertionError: Unable to pre-compile ops without torch installed. Please install torch before attempting to pre-compile ops. HOT 6
- How to set different learning rates for different parameters of LLMs
- Getting parameters of embeddings (safe_get_local_fp32_param)and setting the weight of embeddings (safe_set_local_fp32_param) does not work (bug?). HOT 1
- [BUG] DeepSpeed on pypi not compatible with latest `numpy` HOT 5
- [BUG] GPU memory leaking after deleting deepspeed engine HOT 2
- [BUG] Using and Building DeepSpeedCPUAdam HOT 23
- Bug Report: Issues Building DeepSpeed on Windows HOT 4
- [BUG] Logs full of FutureWarning when training with nightly PyTorch HOT 1
- [BUG] inference ValueError
- Running out of CPU memory. Dataset is loaded for each created process
- [ERROR] [launch.py:321:sigkill_handler] exits with return code = -11
- [BUG] Regression: 0.14.3 causes grad_norm to be zero HOT 2
- [BUG] Deepspeed does not seem to work using GPUDirect, should it? HOT 3
- I am not able to reproduce it. May be I am missing some details. Can you please give me the exact steps & set up you followed?
- Tensor(hidden states)missing across GPU in Pipeline Parallelism Training[BUG]