Comments (8)
@hfwen0502 Did you try using the PyTorchJob Elastic Policy to configure your Job?
Also, check the PyTorch example with torchrun: https://github.com/kubeflow/training-operator/blob/master/examples/pytorch/cpu-demo/demo.yaml.
If you run PyTorch in Elastic Mode with torchrun, it may be able to restart your failed Worker and resume training.
cc @kuizhiqing @johnugeorge @tenzen-y For more info here.
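For reference, a minimal sketch of what the Elastic Policy suggestion above could look like; the job name, image, and replica counts are placeholders and not taken from this thread:

```yaml
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: elastic-example                # placeholder name
spec:
  elasticPolicy:
    rdzvBackend: c10d                  # torchrun rendezvous backend
    minReplicas: 1                     # lower bound the job may shrink to
    maxReplicas: 4                     # upper bound the job may grow to
    maxRestarts: 10                    # illustrative restart budget
  pytorchReplicaSpecs:
    Worker:
      replicas: 4
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/trainer:latest   # placeholder image
```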
from training-operator.
To be honest, in our production environment supporting jobs with 10k+ GPUs, the untouched torchrun rendezvous mechanism is not efficient or stable enough (personal point of view; please tell me if you have a good practice for that).
One more thing to point out: RestartPolicy=OnFailure for a pod means it is restarted in place on the same node. I would recommend deleting the failed pod and rescheduling it onto another node instead, since physical failures are more frequent than cases that can be recovered by simply restarting the process.
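For context, restartPolicy is set per replica type in the PyTorchJob spec; a minimal sketch with illustrative values only (the semantics of each value in the controller are exactly what is being debated below):

```yaml
spec:
  pytorchReplicaSpecs:
    Worker:
      replicas: 8                      # illustrative count
      restartPolicy: OnFailure         # other supported values: Always, Never, ExitCode
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/trainer:latest   # placeholder image
```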
from training-operator.
@hfwen0502 Did you try using the PyTorchJob Elastic Policy to configure your Job? Also, check the PyTorch example with torchrun: https://github.com/kubeflow/training-operator/blob/master/examples/pytorch/cpu-demo/demo.yaml. If you run PyTorch in Elastic Mode with torchrun, it may be able to restart your failed Worker and resume training.
cc @kuizhiqing @johnugeorge @tenzen-y For more info here.
Given that the number of GPUs required in our training jobs typically remains constant within our use cases, we will not be able to take advantage of the elastic policy in this scenario.
from training-operator.
To be honest, in our production environment supporting jobs with 10k+ GPUs, the untouched torchrun rendezvous mechanism is not efficient or stable enough (personal point of view; please tell me if you have a good practice for that).
One more thing to point out: RestartPolicy=OnFailure for a pod means it is restarted in place on the same node. I would recommend deleting the failed pod and rescheduling it onto another node instead, since physical failures are more frequent than cases that can be recovered by simply restarting the process.
So far, the torchrun rendezvous protocol has been working fine at the current scale of our training jobs. We will keep that in mind when scaling to 10k+ GPU use cases.
I agree with the OnFailure comment. This is actually what I am leading to: whether there can be a new restart policy that restarts the entire training job in an automated fashion. Consider GPU failures occurring in the middle of the night, causing the entire job to become stuck and resulting in significant financial waste, with over 1,000 GPUs idling.
from training-operator.
What Andrey suggests is to use torchrun in elastic mode, which enables fault tolerance at the pod level, e.g. nnodes=100:100.
The difficult part is that the communication library NCCL does not support fault tolerance: it forces all worker processes to restart in order to rebuild the communicator after a single peer failure. In the torch context, the agent is responsible for restarting the worker processes, which may not be efficient or stable at large scale. The good news is that we can implement a custom Agent to improve it.
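To illustrate the nnodes=100:100 idea above, here is a rough sketch of a Worker container launching training through torchrun; the image, entrypoint, and rendezvous endpoint are placeholders, not taken from this thread:

```yaml
containers:
  - name: pytorch
    image: registry.example.com/trainer:latest        # placeholder image
    command:
      - torchrun
      - --nnodes=100:100                               # min:max equal -> fixed size, but the group can re-form after a pod is replaced
      - --nproc_per_node=8                             # illustrative GPUs per node
      - --rdzv_backend=c10d
      - --rdzv_endpoint=trainer-worker-0:29400         # placeholder rendezvous endpoint
      - --max_restarts=3                               # illustrative restart budget for the elastic agent
      - train.py                                       # placeholder training script
```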
from training-operator.
This is actually what I am leading to: whether there can be a new restart policy that restarts the entire training job in an automated fashion.
I think that is a valid idea. With such a restart policy, our controller would re-create all of the PyTorchJob's Pods in case of a single Pod failure. What do you think about it, @kuizhiqing @tenzen-y @johnugeorge?
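Purely as a sketch of this proposal (the value below does not exist in the current training-operator API), such a job-level policy might be expressed like this; the name RecreateJob is hypothetical:

```yaml
spec:
  pytorchReplicaSpecs:
    Worker:
      replicas: 100
      restartPolicy: RecreateJob   # hypothetical value: on any single Pod failure, the controller
                                   # deletes and re-creates every Pod of the PyTorchJob so that
                                   # torchrun can form a fresh rendezvous
```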
from training-operator.
To be honest, in our production environment supporting jobs with 10k+ GPUs, the untouched torchrun rendezvous mechanism is not efficient or stable enough (personal point of view; please tell me if you have a good practice for that).
One more thing to point out: RestartPolicy=OnFailure for a pod means it is restarted in place on the same node. I would recommend deleting the failed pod and rescheduling it onto another node instead, since physical failures are more frequent than cases that can be recovered by simply restarting the process.
@kuizhiqing Is this a bug? https://github.com/kubeflow/training-operator/blob/master/pkg/controller.v1/common/pod.go#L355 Pod is deleted when restartPolicy is RestartPolicyOnFailure
from training-operator.
This is actually what I am leading to: whether there can be a new restart policy that restarts the entire training job in an automated fashion.
I think that is a valid idea. With such a restart policy, our controller would re-create all of the PyTorchJob's Pods in case of a single Pod failure. What do you think about it, @kuizhiqing @tenzen-y @johnugeorge?
@andreyvelich @johnugeorge Indeed, we (WG Batch) are working on resolving these barriers with custom restart policies triggered by configurable failure and success policies in sig-apps. These features are called JobPodFailurePolicy and JobSuccessPolicy (whose KEP is approved; it will be published in 1.30 as an alpha).
In this case, the JobPodFailurePolicy could resolve these issues, since it can be configured to determine which errors we should treat as actual errors and which action we should take against each error, such as failing the Job or ignoring the failure.
This is similar to PyTorchJob's restartPolicy with exit-code, but it is more extensible and generic.
Unfortunately, the current training-operator cannot support the upstream Kubernetes PodFailurePolicy and JobSuccessPolicy features since it directly manages raw pods and services.
So, I believe that we can remove these barriers once we complete these tasks: #1718
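For reference, a minimal sketch of the upstream Job podFailurePolicy described above; the container name, exit code, and image are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
      - action: FailJob              # a non-retryable application error fails the whole Job
        onExitCodes:
          containerName: trainer     # illustrative container name
          operator: In
          values: [1]
      - action: Ignore               # node disruptions do not count against backoffLimit
        onPodConditions:
          - type: DisruptionTarget
            status: "True"
  template:
    spec:
      restartPolicy: Never           # podFailurePolicy requires Never at the Pod level
      containers:
        - name: trainer
          image: registry.example.com/trainer:latest   # placeholder image
```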
from training-operator.