
Comments (9)

philschmid commented on July 4, 2024

@ChaiBapchya could you share the launcher and the script you used to run distributed training with Horovod? The scripts/launcher mentioned so far use SageMaker Data Parallelism; having the Horovod version would make it easier to reproduce the error.


ChaiBapchya commented on July 4, 2024

There is only a very minute difference between SMDDP and Horovod (as you know, the SMDDP APIs are deliberately kept similar to the Horovod ones for ease of use).
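To illustrate the parity (a minimal sketch, not the gist itself; the sdp calls assume the smdistributed.dataparallel TensorFlow module):

    import horovod.tensorflow as hvd
    # import smdistributed.dataparallel.tensorflow as sdp  # near drop-in alternative

    hvd.init()                     # sdp.init()
    size = hvd.size()              # sdp.size()       -> total number of workers
    rank = hvd.rank()              # sdp.rank()       -> global worker index
    local_rank = hvd.local_rank()  # sdp.local_rank() -> GPU index on this host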
Here are the two scripts you need to run the training:
https://gist.github.com/ChaiBapchya/03c88c70bd8e003585e7edde436b403d
Run

python launcher.py

on a machine that is able to create SageMaker training jobs.
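For reference, a minimal sketch of what such a launcher typically looks like with the SageMaker Python SDK (the entry point, role, and instance settings below are placeholders, not the gist's actual values):

    import sagemaker
    from sagemaker.tensorflow import TensorFlow

    # Hypothetical launcher: submits train.py as a SageMaker training job
    # with Horovod (MPI) enabled; all values here are placeholders.
    estimator = TensorFlow(
        entry_point="train.py",
        role=sagemaker.get_execution_role(),
        instance_count=2,
        instance_type="ml.p3.16xlarge",
        framework_version="2.4.1",
        py_version="py37",
        distribution={"mpi": {"enabled": True, "processes_per_host": 8}},
        # SMDDP variant instead of Horovod/MPI:
        # distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    )
    estimator.fit()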


philschmid commented on July 4, 2024

Hey @ChaiBapchya,

are you sure these scripts are right?
you imported

    import horovod.tensorflow as hvd

but hvd is never used.

Additionally, I'm not sure this will work, since the script is mixing Keras and plain TensorFlow APIs.


ChaiBapchya commented on July 4, 2024

Oops, thanks for catching that. I fixed that import to sdp; I missed it while pasting the local scripts into the gist.

> not sure if this will work since the script is using keras and tf

Well, the Keras APIs are essentially only used for the loss/optimizer, while the hvd/smddp TF APIs are used to instrument the distributed training. That's how the original script is structured.
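Roughly like this (a minimal sketch of that split, written against the smdistributed.dataparallel TF2 API; the Horovod variant is the same with hvd in place of sdp):

    import tensorflow as tf
    import smdistributed.dataparallel.tensorflow as sdp

    sdp.init()
    # Pin each worker process to one GPU via its local rank.
    gpus = tf.config.experimental.list_physical_devices("GPU")
    if gpus:
        tf.config.experimental.set_visible_devices(gpus[sdp.local_rank()], "GPU")

    # Keras is only used for the loss and the optimizer ...
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5 * sdp.size())

    @tf.function
    def training_step(model, inputs, labels, first_batch):
        with tf.GradientTape() as tape:
            # HF TF models return an output object carrying .logits
            loss = loss_fn(labels, model(inputs, training=True).logits)
        # ... while sdp/hvd instruments the distributed part: allreduce the
        # gradients and broadcast the initial state from rank 0.
        tape = sdp.DistributedGradientTape(tape)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        if first_batch:
            sdp.broadcast_variables(model.variables, root_rank=0)
            sdp.broadcast_variables(optimizer.variables(), root_rank=0)
        return loss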

I was able to get bert_base_uncased and distilbert_base_uncased working for both Horovod & SMDDP with this same script.


philschmid commented on July 4, 2024

Hey @ChaiBapchya,

I have created another script, https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed, with one version for single-GPU and one for multi-GPU.
You can see the results of the tests below.

| model                                 | type    | batch_size | worked |
| ------------------------------------- | ------- | ---------- | ------ |
| bert-base-uncased                     | horovod | 16         | 🛑     |
| bert-base-uncased                     | horovod | 8          | ✅     |
| bert-large-uncased-whole-word-masking | horovod | 8          | 🛑     |
| bert-large-uncased-whole-word-masking | horovod | 6          | ✅     |
| bert-base-uncased                     | single  | 16         | ✅     |
| bert-base-uncased                     | single  | 8          | ✅     |
| bert-large-uncased-whole-word-masking | single  | 16         | 🛑     |
| bert-large-uncased-whole-word-masking | single  | 8          | ✅     |

For me, this doesn't seem to be a transformers issue; it looks more related to TensorFlow and Horovod. Can you ask your internal team for more insight into why Horovod consumes so much extra GPU memory?


ChaiBapchya commented on July 4, 2024

Experiment: test with vanilla TF 2.4.1
Result: OOM
Explanation: replacing the TensorFlow in the DLC (the aws-tf DLC) with the vanilla/stock tensorflow-gpu package (same version, 2.4.1) from PyPI results in similar OOM errors.

Summary: this OOM is not caused by the aws-tf binary alone. The issue is probably intrinsic to vanilla TF 2.4.1.
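For anyone reproducing this, the swap amounts to something like the following inside the running container (a sketch; the exact package name the DLC wheel is registered under is an assumption):

    import subprocess, sys

    # Remove the AWS-built TensorFlow wheel, then install the stock PyPI
    # build of the same version (package names are assumptions).
    subprocess.check_call([sys.executable, "-m", "pip", "uninstall", "-y", "tensorflow"])
    subprocess.check_call([sys.executable, "-m", "pip", "install", "tensorflow-gpu==2.4.1"])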


ChaiBapchya commented on July 4, 2024

As far as your scripts (https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed) are concerned: they don't use the SMDP/Horovod APIs, right?
What else differs between your single-node and multi-node scripts? What was the purpose of removing those APIs and testing on a single node?


philschmid commented on July 4, 2024

It uses horovod.keras (https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed/blob/752e9d545dfb0dbe2920f03c8d75ce6b6571894c/scripts/train.py#L21) instead of horovod.tensorflow or SMDP, since I don't yet have access to a Keras SMDP API.

I tested multi-node against single-node/single-GPU to verify that it works and to have comparison values for performance and efficiency.
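The relevant part of train.py looks roughly like this (a sketch, assuming the tf.keras flavor of Horovod; the toy model and data stand in for the actual fine-tuning setup):

    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()
    gpus = tf.config.experimental.list_physical_devices("GPU")
    if gpus:
        tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

    # Toy stand-ins for the real model/dataset.
    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    data = tf.data.Dataset.from_tensor_slices(
        (tf.random.normal((64, 8)), tf.zeros(64, dtype=tf.int32))
    ).batch(8)

    # Horovod wraps the optimizer (gradient allreduce) and broadcasts the
    # initial weights via a callback; Keras keeps the training loop.
    optimizer = hvd.DistributedOptimizer(
        tf.keras.optimizers.Adam(learning_rate=5e-5 * hvd.size())
    )
    model.compile(
        optimizer=optimizer,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
    model.fit(
        data,
        callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
        verbose=1 if hvd.rank() == 0 else 0,
    )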


