Comments (8)
This is correct, it is the responsibility of the custom loader to load the appropriate dataset portion based on local rank (or whatever else the client finds appropriate) into GPU memory. DeepSpeed does not impose any restrictions on the custom dataloader, and does not perform any sanity checks.
Hope that clears things up.
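To illustrate the point above, here is a minimal sketch (hypothetical function name, plain Python) of how a custom loader might split a dataset into per-rank portions; in a real job `rank` and `world_size` would come from the launcher, e.g. `int(os.environ["LOCAL_RANK"])` or `torch.distributed.get_rank()`.

```python
def shard_for_rank(dataset_len, rank, world_size):
    """Return the contiguous slice of sample indices owned by this rank.

    Drops the remainder when dataset_len is not divisible by world_size,
    similar to what torch's DistributedSampler can do with drop_last.
    """
    per_rank = dataset_len // world_size
    start = rank * per_rank
    return list(range(start, start + per_rank))

# Example: 10 samples split across 2 ranks.
print(shard_for_rank(10, 0, 2))  # [0, 1, 2, 3, 4]
print(shard_for_rank(10, 1, 2))  # [5, 6, 7, 8, 9]
```

Whether you shard contiguously or by striding is up to the loader; DeepSpeed only sees the batches you hand it.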
- Passing `training_data` to `deepspeed.initialize` is optional, not required. Some models can benefit from DeepSpeed's I/O optimizations, but using a torch trainloader is fine.
- In our experience, DeepSpeed works well with custom trainloaders and datasets, so I suspect the answer to that is yes, but please share any issues you run into.
- The `train_batch_size` in the json file is used to perform gradient accumulation and compute progress statistics, so a mismatch could result in incorrect training and confusing statistics.
- Yes, this is supported.
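For reference, the invariant DeepSpeed checks between the config values (the names `train_batch_size`, `train_micro_batch_size_per_gpu`, and `gradient_accumulation_steps` are from the DeepSpeed json config; the concrete numbers here are just an example):

```python
# Per-GPU batch per step, accumulation steps, and GPU count multiply
# together to give the effective global batch size.
micro_batch_per_gpu = 4   # train_micro_batch_size_per_gpu
grad_accum_steps = 8      # gradient_accumulation_steps
num_gpus = 2              # world size

train_batch_size = micro_batch_per_gpu * grad_accum_steps * num_gpus
print(train_batch_size)  # 64
```

If your custom loader yields batches of a different size than `train_micro_batch_size_per_gpu`, this relation no longer holds, which is the mismatch described above.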
@tjruwase could you clarify - how does the parallelization work with a custom dataloader? Do you need to make sure the dataloader uses the local rank as input to load a separate portion of the dataset manually?
@tjruwase Thanks for the clarification.
Just one more question.
Assuming I have a custom data generator like this:

```python
for batch_idx, batch in enumerate(DatasetGenerator):
    data = batch['input'].to(model_engine.local_rank)
    target = batch['target'].to(model_engine.local_rank)
    src_padding = batch['padding_mask'].to(model_engine.local_rank)
```
Does this mean:
- Should the batch size be equal to `train_micro_batch_size_per_gpu`?
- Should it provide a different batch for each GPU/node?
- Yes, the batch size should be equal to `train_micro_batch_size_per_gpu`, which is the batch size for a single step on one GPU.
- Assuming `DatasetGenerator` is returning the correct batch for each GPU, then this would be correct, since the `.to()` call simply moves the data bits into GPU memory.
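A minimal sketch of what "the correct batch for each GPU" could look like (hypothetical generator name, plain Python lists standing in for samples): each rank walks a disjoint strided slice of the data and yields micro-batches of size `train_micro_batch_size_per_gpu`.

```python
def rank_batches(samples, rank, world_size, micro_batch_size):
    """Yield this rank's micro-batches from a disjoint strided shard."""
    shard = samples[rank::world_size]  # every world_size-th sample
    for i in range(0, len(shard), micro_batch_size):
        yield shard[i:i + micro_batch_size]

# Example: 8 samples, 2 ranks, micro batch size 2 -> disjoint batches.
print(list(rank_batches(list(range(8)), 0, 2, 2)))  # [[0, 2], [4, 6]]
print(list(rank_batches(list(range(8)), 1, 2, 2)))  # [[1, 3], [5, 7]]
```

As noted above, DeepSpeed does not verify this partitioning; if every rank yielded the same batch, training would silently see duplicated data.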
@tjruwase Thanks a lot that answers all my questions for now :)
@tjruwase Assuming I'm using a custom data loader like @agemagician above, but in a multi-node, multi-GPU setting, how would I go about sending tensors to the right GPU? Do I still do `tensor.to(engine.local_rank)`, or should I use the global rank? Thanks.
@tnq177, I believe you want `local_rank`, as crossing node boundaries requires communication collectives, like `broadcast`.
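The distinction can be sketched with the usual rank arithmetic for a homogeneous multi-node job (hypothetical numbers; real jobs read these from launcher-set environment variables such as `RANK` and `LOCAL_RANK`): the global rank indexes all processes across nodes, while `local_rank` indexes GPUs on one node, so `.to(local_rank)` always targets a device that actually exists on the current machine.

```python
# 2 nodes x 4 GPUs: global ranks 0-7, local ranks 0-3 on each node.
gpus_per_node = 4
global_rank = 6  # e.g. the 7th process overall

local_rank = global_rank % gpus_per_node   # GPU index on this node
node_index = global_rank // gpus_per_node  # which node this process is on
print(node_index, local_rank)  # 1 2
```

Using the global rank (6) as a device index would fail on a 4-GPU node, which is why `local_rank` is the right argument to `.to()`.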