Our goal is to run distributed TensorFlow in a Kubernetes cluster. First, we run the distributed CIFAR-10 example included in the models module of TensorFlow.
We build a Docker image based on tensorflow:latest that includes the models directory and our cifar10_cluster_train.py.
- Make a directory named cifar10 containing the Dockerfile and the models directory.
- cd cifar10
- docker build -t your_docker_account_name/your_repositories_name:tag .
- docker push your_docker_account_name/your_repositories_name:tag

Then we can pull the image we built from our Docker Hub account when we define rc.yaml.
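The Dockerfile for this image might look like the following sketch (the base tag and file destinations are assumptions based on the description above; adjust them to your layout):

```dockerfile
# Base image: the official TensorFlow image mentioned above
FROM tensorflow/tensorflow:latest

# Copy the TensorFlow models directory and our training script
COPY models /models
COPY cifar10_cluster_train.py /cifar10_cluster_train.py

# The Kubernetes template starts the script with per-pod arguments,
# so this default command only makes manual test runs easier.
CMD ["python", "/cifar10_cluster_train.py"]
```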
You can search for how to build an NFS cluster for the OS you use. Here are some tips:
- Make sure the firewalld service is turned off and SELinux is permissive (check with "getenforce" on CentOS).
- If you run into trouble that is hard to diagnose, running "iptables -P FORWARD ACCEPT" may fix it.
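On CentOS, for example, a minimal NFS server setup might look roughly like this (the package name is standard; the exported path and subnet are illustrative assumptions, and these commands require root):

```sh
# Install and start the NFS server (CentOS/RHEL)
yum install -y nfs-utils
systemctl enable --now nfs-server

# Export a directory to the cluster subnet (adjust path and CIDR)
mkdir -p /media
echo "/media 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra

# As the tips above note: disable the firewall, set SELinux permissive
systemctl stop firewalld
setenforce 0
```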
- kubectl create -f (the files we give)
- kubectl create -f myjob.template # run this on the master host. We define 4 services for 4 pods.
- kubectl get pods -o wide # get the name and status of each pod and the node it was scheduled on.
- kubectl describe pods pod_name # get detailed information about the pod.
- kubectl logs pod_name # get logs of the first container in the pod.
- kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod # delete all evicted pods
- kubectl delete -f myjob.template # delete all ReplicaSets and Services created by the template; this can save you some time.
- kubectl exec -it pod_name sh (or /bin/bash) # open a terminal in the first container of the pod.
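We do not reproduce the full myjob.template here, but one worker's ReplicaSet and Service might look roughly like this sketch (the names, port, NFS server address, and label scheme are assumptions; the real template we provide defines 4 such pairs):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: worker0
spec:
  replicas: 1
  selector:
    matchLabels:
      job: worker0
  template:
    metadata:
      labels:
        job: worker0
    spec:
      containers:
      - name: worker0
        image: your_docker_account_name/your_repositories_name:tag
        ports:
        - containerPort: 2222        # port the TF server listens on
        volumeMounts:
        - name: logs
          mountPath: /data           # where the script writes its stdout
      volumes:
      - name: logs
        nfs:
          server: your_nfs_server_ip # the NFS server built above
          path: /media
---
apiVersion: v1
kind: Service
metadata:
  name: worker0                      # pods address each other by this name
spec:
  selector:
    job: worker0
  ports:
  - port: 2222
```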
- The train_dir is /tmp/cifar10_train.
- We can read the stdout stream of each container in the dir /media, because our Python file redirects it to /data/worker+task_index, and /data is mounted on /media as defined in the template.
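The stdout redirection in cifar10_cluster_train.py might be implemented along these lines (a minimal sketch; the function name and the log_dir default are assumptions, only the /data/worker+task_index path comes from the description above):

```python
import os
import sys

def redirect_stdout(task_index, log_dir="/data"):
    """Redirect this process's stdout to <log_dir>/worker<task_index>.

    Inside the container /data is the volume that the template mounts
    from /media, so the host can read the training output there.
    """
    os.makedirs(log_dir, exist_ok=True)
    log_path = os.path.join(log_dir, "worker" + str(task_index))
    # Line-buffered text file, so output appears promptly on the host.
    sys.stdout = open(log_path, "w", buffering=1)
    return log_path
```

After calling this early in the script, every print from the worker (loss values, step timings) lands in its own per-task file instead of the container terminal.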