Comments (8)

andreyvelich commented on June 13, 2024

@hfwen0502 Did you try to use the PyTorchJob Elastic Policy to configure your Job?
Also, check the PyTorch example with torchrun: https://github.com/kubeflow/training-operator/blob/master/examples/pytorch/cpu-demo/demo.yaml.
Maybe if you run PyTorch in elastic mode with torchrun, it can successfully restart your failed Worker and resume training?
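For reference, a rough sketch of what an elastic PyTorchJob could look like. The `elasticPolicy` field names follow my reading of the current training-operator API, and the image name and training script are placeholders, so treat this as an illustration rather than a tested manifest:

```yaml
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: elastic-example
spec:
  elasticPolicy:
    rdzvBackend: c10d     # torchrun rendezvous backend
    minReplicas: 2        # training keeps running as long as at least 2 workers are alive
    maxReplicas: 4
    maxRestarts: 10
  pytorchReplicaSpecs:
    Worker:
      replicas: 4
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: my-registry/my-training-image:latest   # placeholder image
              command:
                - torchrun
                - --nproc_per_node=1
                - train.py                                  # placeholder script
```

My understanding is that with `elasticPolicy` set the operator injects the rendezvous settings as environment variables that torchrun picks up, so they don't need to be repeated on the command line, but please double-check against the linked demo.yaml.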

cc @kuizhiqing @johnugeorge @tenzen-y For more info here.

kuizhiqing commented on June 13, 2024

To be honest, in our production environment supporting jobs with 10k+ GPUs, the untouched torchrun rendezvous mechanism is not efficient or stable enough (personal point of view, please tell me if you have a good practice for that).

One more thing to point out: RestartPolicy=OnFailure for a pod means the container is restarted in place on the same node, while I would recommend deleting the failed pod and rescheduling it onto another node, since physical failures are more frequent than cases that can be recovered by simply restarting the process.
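To make the distinction concrete, a sketch of plain Kubernetes pod semantics (nothing training-operator specific):

```yaml
# Pod-level restart semantics, roughly:
spec:
  restartPolicy: OnFailure   # the kubelet restarts the failed container in place, on the same node
# Deleting the Pod instead lets the scheduler place a fresh replacement Pod,
# possibly on a different, healthy node.
```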

hfwen0502 commented on June 13, 2024

> @hfwen0502 Did you try to use the PyTorchJob Elastic Policy to configure your Job? Also, check the PyTorch example with torchrun: https://github.com/kubeflow/training-operator/blob/master/examples/pytorch/cpu-demo/demo.yaml. Maybe if you run PyTorch in elastic mode with torchrun, it can successfully restart your failed Worker and resume training?
>
> cc @kuizhiqing @johnugeorge @tenzen-y For more info here.

Given that the number of GPUs required in our training jobs typically remains constant within our use cases, we will not be able to take advantage of the elastic policy in this scenario.

hfwen0502 commented on June 13, 2024

> To be honest, in our production environment supporting jobs with 10k+ GPUs, the untouched torchrun rendezvous mechanism is not efficient or stable enough (personal point of view, please tell me if you have a good practice for that).
>
> One more thing to point out: RestartPolicy=OnFailure for a pod means the container is restarted in place on the same node, while I would recommend deleting the failed pod and rescheduling it onto another node, since physical failures are more frequent than cases that can be recovered by simply restarting the process.

The torchrun rendezvous protocol has been working fine at the current scale of our training jobs. We will keep that in mind when scaling to 10k+ GPU use cases.

I agree with the OnFailure comment. This is actually what I am getting at: whether there can be a new restart policy that restarts the entire training job in an automated fashion. Consider GPU failures occurring in the middle of the night that leave the entire job stuck, resulting in significant financial waste with over 1,000 GPUs idling.

kuizhiqing commented on June 13, 2024

What Andrey suggests is to use torchrun in elastic mode, which can enable fault tolerance at the pod level, e.g. nnodes=100:100.

The hard part is that the communication library NCCL does not support fault tolerance: it forces all worker processes to be restarted to rebuild the communicator after a single peer failure. In the torch context, the agent is responsible for restarting the worker processes, which may not be efficient or stable at large scale. The good news is that we can implement a custom Agent to improve it.
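For illustration, the fixed-size variant (min nodes == max nodes) might look roughly like this at the worker command level; the rendezvous endpoint, job id, and script name are placeholders:

```yaml
# Sketch of a worker container command: fault tolerance at a fixed world size.
containers:
  - name: pytorch
    image: my-registry/my-training-image:latest   # placeholder
    command:
      - torchrun
      - --nnodes=100:100              # min == max: no elasticity, but the group can restart after a failure
      - --nproc_per_node=8
      - --max_restarts=3              # how many worker-group restarts the agent allows before giving up
      - --rdzv_backend=c10d
      - --rdzv_endpoint=rendezvous-host:29400   # placeholder endpoint
      - --rdzv_id=my-job                        # placeholder id
      - train.py                                # placeholder script
```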

andreyvelich commented on June 13, 2024

> This is actually what I am getting at: whether there can be a new restart policy that restarts the entire training job in an automated fashion.

I think that is a valid idea. With such a restart policy, our controller would re-create all of the PyTorchJob's pods in case of a single Pod failure. What do you think about it @kuizhiqing @tenzen-y @johnugeorge?

johnugeorge commented on June 13, 2024

> To be honest, in our production environment supporting jobs with 10k+ GPUs, the untouched torchrun rendezvous mechanism is not efficient or stable enough (personal point of view, please tell me if you have a good practice for that).
>
> One more thing to point out: RestartPolicy=OnFailure for a pod means the container is restarted in place on the same node, while I would recommend deleting the failed pod and rescheduling it onto another node, since physical failures are more frequent than cases that can be recovered by simply restarting the process.

@kuizhiqing Is this a bug? https://github.com/kubeflow/training-operator/blob/master/pkg/controller.v1/common/pod.go#L355 Pod is deleted when restartPolicy is RestartPolicyOnFailure

tenzen-y commented on June 13, 2024

> This is actually what I am getting at: whether there can be a new restart policy that restarts the entire training job in an automated fashion.
>
> I think that is a valid idea. With such a restart policy, our controller would re-create all of the PyTorchJob's pods in case of a single Pod failure. What do you think about it @kuizhiqing @tenzen-y @johnugeorge?

@andreyvelich @johnugeorge Indeed, we (WG Batch) are working on resolving these barriers with custom restart policies triggered by configurable failure and success policies in sig-apps. These features are called JobPodFailurePolicy and JobSuccessPolicy (the KEP is approved, and this feature will be published in 1.30 as an alpha).

In this case, JobPodFailurePolicy could resolve these issues, since it can be configured to determine which errors should be treated as actual failures and which action to take against each error, such as failing the Job or ignoring the failure.
This is similar to PyTorchJob's exit-code-based restartPolicy, but it is more extensible and generic.
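For illustration, a sketch of how the upstream batch/v1 Job API expresses this; the container name, image, and exit codes are placeholders, and note that podFailurePolicy requires restartPolicy: Never on the pod template:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pod-failure-policy-example
spec:
  parallelism: 4
  completions: 4
  backoffLimit: 6
  podFailurePolicy:
    rules:
      - action: FailJob            # a non-retriable application error fails the whole Job
        onExitCodes:
          containerName: trainer   # placeholder container name
          operator: In
          values: [1]              # placeholder exit code
      - action: Ignore             # Pod disruptions (e.g. node drain) don't count against backoffLimit
        onPodConditions:
          - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never         # required when podFailurePolicy is used
      containers:
        - name: trainer
          image: my-registry/my-training-image:latest   # placeholder
          command: ["python", "train.py"]               # placeholder
```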

Unfortunately, the current training-operator cannot support the upstream Kubernetes PodFailurePolicy and JobSuccessPolicy features since it directly manages raw pods and services.

So, I believe that we can remove these barriers once we complete these tasks: #1718
