Comments (4)
Hi
So I think it would be the responsibility of the "ops" person who deploys the TfJob operator to specify the location of the drivers on the host machine and appropriate mount points in the cluster.
This assumes that all GPU nodes in the cluster use the same driver version and have the drivers installed in the same location.
Supporting multiple driver versions really depends on whether K8s eventually supports this.
For the TfJob operator the goal is really to just cut down on some of the boilerplate when specifying GPU jobs.
So with the current operator you can write a TfJob spec to use GPUs like so:
```yaml
apiVersion: mlkube.io/v1beta1
kind: TfJob
spec:
  replica_specs:
  - replicas: 1
    template:
      metadata:
        creationTimestamp: null
      spec:
        containers:
        - args:
          - --gpu
          env:
          - name: LD_LIBRARY_PATH
            value: /usr/local/cuda/lib64
          image: gcr.io/project/tf_smoke_cmle-375-20:latest
          name: tensorflow
          resources: {}
          securityContext:
            privileged: true
          volumeMounts:
          - mountPath: /dev/nvidia0
            name: dev-nvidia
          - mountPath: /dev/nvidiactl
            name: dev-nvidiactl
          - mountPath: /dev/nvidia-uvm
            name: dev-nvidia-uvm
        restartPolicy: OnFailure
        volumes:
        - hostPath:
            path: /dev/nvidia0
          name: dev-nvidia
        - hostPath:
            path: /dev/nvidiactl
          name: dev-nvidiactl
        - hostPath:
            path: /dev/nvidia-uvm
          name: dev-nvidia-uvm
    tf_port: 2222
    tf_replica_type: MASTER
```
Since the mount paths would be the same for all TfJobs, there's no reason to make the user specify them when creating individual jobs. The user could just specify the following:
```yaml
apiVersion: mlkube.io/v1beta1
kind: TfJob
spec:
  replica_specs:
  - replicas: 1
    template:
      metadata:
        creationTimestamp: null
      spec:
        containers:
        - args:
          - --gpu
    tf_port: 2222
    tf_replica_type: MASTER
    Gpus:
    - type: nvidia-tesla-k80
      count: 1
```
The TfJob operator would be instantiated with the information it needs to add to the actual JobController specs to use GPUs. This would include adding the volume mounts shown above, plus scheduling constraints so the job lands on GPU nodes.
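As a rough sketch of what the operator could inject into each replica's pod template: the node label below (`accelerator: nvidia-tesla-k80`) is a hypothetical convention the ops person would configure at deploy time, not something the operator defines itself, and only one of the three device mounts is repeated here for brevity.

```yaml
# Hypothetical fragment injected by the operator; the nodeSelector label
# and host paths are operator-deployment configuration, not fixed values.
spec:
  nodeSelector:
    accelerator: nvidia-tesla-k80   # assumed label applied to GPU nodes by ops
  volumes:
  - name: dev-nvidia
    hostPath:
      path: /dev/nvidia0            # driver/device location on the host
```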
from training-operator.
Hey!
If the user is not responsible for mounting the drivers, what's your vision for dealing with different driver versions and different install locations?
PR #9 is out for review.
It's pretty close to what I suggested above. The main difference is that we look at container resource requests and limits to determine whether GPUs are required, rather than introducing new fields to indicate when GPUs are desired.
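For illustration, under the resources/limits approach a user would request GPUs through the standard container resources field and the operator would detect that and inject the driver mounts itself. The sketch below uses the `alpha.kubernetes.io/nvidia-gpu` resource name from K8s' alpha GPU support of that era; whether PR #9 matches on exactly this name is an assumption (newer clusters expose GPUs as `nvidia.com/gpu` via device plugins).

```yaml
# Sketch: GPU demand expressed via resource limits instead of a new Gpus field.
# The operator sees a nonzero GPU limit and adds the mounts/constraints itself.
containers:
- name: tensorflow
  image: gcr.io/project/tf_smoke_cmle-375-20:latest
  args:
  - --gpu
  resources:
    limits:
      alpha.kubernetes.io/nvidia-gpu: 1   # assumed alpha-era resource name
```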
PR #9 is merged.