Comments (30)
Yeah, this is a real issue. I hadn't read the docker pull usage correctly. It now does docker pull img1 img2 ..., but only one argument is allowed.
I will fix this in the next release; as a workaround, you have to docker pull all the images manually.
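Until the fix lands, a one-image-at-a-time loop does the same job as the broken multi-image pull. A minimal sketch; the image names in the commented usage line are assumptions drawn from the logs later in this thread:

```shell
#!/bin/sh
# Workaround sketch: docker pull (as invoked here) accepts only one image,
# so pull each image in its own invocation.
# DOCKER is overridable so the loop can be exercised without a docker daemon.
DOCKER="${DOCKER:-docker}"

pull_all() {
    for img in "$@"; do
        echo "Pulling $img"
        "$DOCKER" pull "$img" || return 1
    done
}

# Image names are assumptions based on this thread's logs:
# pull_all kubernetesonarm/etcd kubernetesonarm/flannel \
#          kubernetesonarm/hyperkube kubernetesonarm/pause
```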
from kubernetes-on-arm.
After pulling the images, I got:
kube-config enable-worker
Transferring images to system-docker
Copying kubernetesonarm/flannel to system-docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
Job for flannel.service failed because the control process exited with error code. See "systemctl status flannel.service" and "journalctl -xe" for details.
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
Starting the worker containers
Created symlink from /etc/systemd/system/multi-user.target.wants/k8s-worker.service to /usr/lib/systemd/system/k8s-worker.service.
Job for k8s-worker.service failed because the control process exited with error code. See "systemctl status k8s-worker.service" and "journalctl -xe" for details.
Worker Kubernetes services enabled
With:
systemctl status flannel.service -l
* flannel.service - Flannel Overlay Network for Kubernetes
Loaded: loaded (/usr/lib/systemd/system/flannel.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2015-10-21 21:19:25 CEST; 1min 57s ago
Oct 21 21:19:22 kubepi2 bash[4779]: cc678c339590: Pull complete
Oct 21 21:19:22 kubepi2 bash[4779]: cc678c339590: Already exists
Oct 21 21:19:22 kubepi2 bash[4779]: Digest: sha256:e9bca63535e786a2ce8a3aef84742ee4079aca54d87f3f03ff1b80714656bf5d
Oct 21 21:19:22 kubepi2 bash[4779]: Status: Downloaded newer image for kubernetesonarm/etcd:latest
Oct 21 21:19:24 kubepi2 bash[4779]: runtime: this CPU has no VFPv3 floating point hardware, so it cannot run
Oct 21 21:19:24 kubepi2 bash[4779]: this GOARM=7 binary. Recompile using GOARM=6.
Oct 21 21:19:25 kubepi2 systemd[1]: flannel.service: Control process exited, code=exited status=1
Oct 21 21:19:25 kubepi2 systemd[1]: Failed to start Flannel Overlay Network for Kubernetes.
Oct 21 21:19:25 kubepi2 systemd[1]: flannel.service: Unit entered failed state.
Oct 21 21:19:25 kubepi2 systemd[1]: flannel.service: Failed with result 'exit-code'.
and:
systemctl status docker.service -l
* docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
`-docker-flannel.conf
Active: failed (Result: start-limit) since Wed 2015-10-21 21:19:35 CEST; 2min 55s ago
Docs: https://docs.docker.com
Main PID: 166 (code=exited, status=0/SUCCESS)
Oct 21 21:19:43 kubepi2 systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Oct 21 21:19:43 kubepi2 systemd[1]: Failed to start Docker Application Container Engine.
Oct 21 21:19:43 kubepi2 systemd[1]: docker.service: Failed with result 'resources'.
Oct 21 21:19:43 kubepi2 systemd[1]: docker.service: Failed to load environment files: No such file or directory
Oct 21 21:19:43 kubepi2 systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Oct 21 21:19:43 kubepi2 systemd[1]: Failed to start Docker Application Container Engine.
Oct 21 21:19:43 kubepi2 systemd[1]: docker.service: Failed with result 'resources'.
Oct 21 21:19:43 kubepi2 systemd[1]: docker.service: Start request repeated too quickly.
Oct 21 21:19:43 kubepi2 systemd[1]: Failed to start Docker Application Container Engine.
Oct 21 21:19:43 kubepi2 systemd[1]: docker.service: Failed with result 'start-limit'.
and:
systemctl status k8s-worker.service -l
* k8s-worker.service - The Master Components for Kubernetes
Loaded: loaded (/usr/lib/systemd/system/k8s-worker.service; enabled; vendor preset: disabled)
Active: activating (start-pre) since Wed 2015-10-21 21:23:29 CEST; 376ms ago
Process: 5817 ExecStop=/usr/bin/docker stop k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
Process: 5806 ExecStartPost=/usr/bin/docker run -d --name=k8s-worker-proxy --net=host --privileged kubernetesonarm/hyperkube /hyperkube proxy --master=http://${K8S_MASTER_IP}:8080 --v=2 (code=exited, status=1/FAILURE)
Process: 5805 ExecStart=/bin/bash -c source /etc/kubernetes/k8s.conf; exec docker run --name=k8s-worker --net=host -v /var/run/docker.sock:/var/run/docker.sock kubernetesonarm/hyperkube /hyperkube kubelet --pod_infra_container_image=kubernetesonarm/pause --api-servers=http://${K8S_MASTER_IP}:8080 --v=2 --address=127.0.0.1 --enable-server --hostname-override=$(/usr/bin/hostname -i | /usr/bin/awk '{print $1}') --cluster-dns=10.0.0.10 --cluster-domain=cluster.local (code=exited, status=1/FAILURE)
Process: 5798 ExecStartPre=/usr/bin/docker rm k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
Main PID: 5805 (code=exited, status=1/FAILURE); : 5823 (docker)
CGroup: /system.slice/k8s-worker.service
`-control
`-5823 /usr/bin/docker kill k8s-worker k8s-worker-proxy
Oct 21 21:23:29 kubepi2 systemd[1]: Starting The Master Components for Kubernetes...
And after a reboot, docker no longer works:
[root@kubepi2 ~]# systemctl status docker -l
* docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
`-docker-flannel.conf
Active: failed (Result: start-limit)
Docs: https://docs.docker.com
Oct 21 21:31:39 kubepi2 systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Oct 21 21:31:39 kubepi2 systemd[1]: Failed to start Docker Application Container Engine.
Oct 21 21:31:39 kubepi2 systemd[1]: docker.service: Failed with result 'resources'.
Oct 21 21:31:39 kubepi2 systemd[1]: docker.service: Failed to load environment files: No such file or directory
Oct 21 21:31:39 kubepi2 systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Oct 21 21:31:39 kubepi2 systemd[1]: Failed to start Docker Application Container Engine.
Oct 21 21:31:39 kubepi2 systemd[1]: docker.service: Failed with result 'resources'.
Oct 21 21:31:39 kubepi2 systemd[1]: docker.service: Start request repeated too quickly.
Oct 21 21:31:39 kubepi2 systemd[1]: Failed to start Docker Application Container Engine.
Oct 21 21:31:39 kubepi2 systemd[1]: docker.service: Failed with result 'start-limit'.
Oh, I thought go automatically compiled using GOARM=6
...
Well, then I must set GOARM on every compile for this to work on the RPi 1.
I haven't had time to test everything on my RPi 1, so I didn't catch this issue before.
But, unfortunately, it seems you can't use your RPi 1 until I've patched this.
These are the relevant lines, which cause the whole chain to fail.
You can do kube-config disable-machine to revert.
disable-machine may work, but in future releases the command will be disable or disable-node. Always check the usage first.
Oct 21 21:19:24 kubepi2 bash[4779]: runtime: this CPU has no VFPv3 floating point hardware, so it cannot run
Oct 21 21:19:24 kubepi2 bash[4779]: this GOARM=7 binary. Recompile using GOARM=6.
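The error above can be avoided at build time by targeting ARMv6. A sketch of the build environment; the output name and the package path in the commented build line are placeholders:

```shell
#!/bin/sh
# Build settings for a binary that runs on the Pi 1 (ARMv6, no VFPv3).
# GOARM=6 binaries also run on ARMv7 boards, just without VFPv3 instructions.
export GOOS=linux GOARCH=arm GOARM=6
echo "cross-compiling for $GOOS/$GOARCH with GOARM=$GOARM"
# go build -o flanneld-armv6 .   # run where the Go toolchain is available
```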
I know this bug can be a little frustrating, but this project is only in beta and under development, so bugs are to be expected.
Your testing is incredibly appreciated. 😄
I will set up some kind of autotests in the future.
No problem, I know the status of the project and I am happy to help :)
Let me know what I can do; if you need me to compile something, just give me instructions!
@nsteinmetz I tried to compile flannel with GOARM=6 and got these errors when running on a Pi 1:
SIGILL: illegal instruction
PC=0x55148
goroutine 1 [running, locked to thread]:
math.init·1()
/goroot/src/math/pow10.go:34 +0x20 fp=0x10833e98 sp=0x10833e94
math.init()
/goroot/src/math/unsafe.go:21 +0x5c fp=0x10833e9c sp=0x10833e98
reflect.init()
/goroot/src/reflect/value.go:2443 +0x5c fp=0x10833ebc sp=0x10833e9c
fmt.init()
/goroot/src/fmt/scan.go:1169 +0x60 fp=0x10833f08 sp=0x10833ebc
compress/gzip.init()
/goroot/src/compress/gzip/gzip.go:272 +0x5c fp=0x10833f24 sp=0x10833f08
net/http.init()
/goroot/src/net/http/transport.go:1275 +0x64 fp=0x10833fa8 sp=0x10833f24
github.com/coreos/flannel/Godeps/_workspace/src/github.com/coreos/etcd/pkg/transport.init()
/flannel-0.5.3/gopath/src/github.com/coreos/flannel/Godeps/_workspace/src/github.com/coreos/etcd/pkg/transport/timeout_transport.go:43 +0x5c fp=0x10833fac sp=0x10833fa8
github.com/coreos/flannel/subnet.init()
/flannel-0.5.3/gopath/src/github.com/coreos/flannel/subnet/watch.go:146 +0x5c fp=0x10833fbc sp=0x10833fac
main.init()
/flannel-0.5.3/gopath/src/github.com/coreos/flannel/version.go:17 +0x5c fp=0x10833fc0 sp=0x10833fbc
runtime.main()
/goroot/src/runtime/proc.go:58 +0xf8 fp=0x10833fe4 sp=0x10833fc0
runtime.goexit()
/goroot/src/runtime/asm_arm.s:1322 +0x4 fp=0x10833fe4 sp=0x10833fe4
goroutine 2 [runnable]:
runtime.forcegchelper()
/goroot/src/runtime/proc.go:90
runtime.goexit()
/goroot/src/runtime/asm_arm.s:1322 +0x4
trap 0x6
error 0x0
oldmask 0x0
r0 0x5fb3b0
r1 0x10832200
r2 0xc
r3 0xc
r4 0xfffffade
r5 0x1570
r6 0x1c
r7 0x7
r8 0x1083601c
r9 0x0
r10 0x108000a0
fp 0x5f857d
ip 0x0
sp 0x10833e94
lr 0x5525c
pc 0x55148
cpsr 0x20000010
fault 0x0
Also tried to run flannel with GOARM=7 and got the same as you:
runtime: this CPU has no VFPv3 floating point hardware, so it cannot run
this GOARM=7 binary. Recompile using GOARM=6.
Both of these binaries were run outside of docker.
In the worst case, two builds are required: one for armv6 and one for armv7.
I will try to compile this again and run it inside of docker to test.
As far as I understand, go should be able to compile and run armv6 binaries on armv7 and up too, but obviously, the armv6 binary won't take advantage of new armv7 features like VFPv3 and so on.
GOARM docs
I think the problem might be something with the build env, the dependencies (flannel is dynamically built), or flannel itself. Will try with etcd also.
If you want to try building this yourself, run docker run -it luxas/go /bin/bash, set K8S_VERSION and so on, and type the commands in this file.
BTW, the pull issue should be resolved now, but I haven't uploaded it yet.
Ok, I'll try over the weekend.
I couldn't run your luxas/go container:
docker run -it luxas/go /bin/bash
Error response from daemon: Cannot start container 3b9b41e0f621bf8d32ab97d5f3ab51cb4afc0f67141fd9514e64b15ef49c33cf: [8] System error: open /sys/fs/cgroup/cpu,cpuacct/init.scope/system.slice/docker-3b9b41e0f621bf8d32ab97d5f3ab51cb4afc0f67141fd9514e64b15ef49c33cf.scope/cpu.shares: no such file or directory
There seems to be a fix (moby/moby#16256): edit /etc/systemd/system/multi-user.target.wants/docker.service and add "--exec-opt native.cgroupdriver=cgroupfs":
ExecStart=/usr/bin/docker -d --exec-opt native.cgroupdriver=cgroupfs -H fd://
Now I can run it...
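For reference, the same flag can be carried in a systemd drop-in instead of editing the unit file in place, so it survives package upgrades. A sketch; the helper function and the drop-in file name are made up, and the ExecStart line mirrors the one above:

```shell
#!/bin/sh
# Write a systemd drop-in that overrides docker's ExecStart with the
# cgroupfs exec-opt. The drop-in file name (10-cgroupfs.conf) is made up.
write_dropin() {
    dir="$1"
    mkdir -p "$dir"
    cat > "$dir/10-cgroupfs.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker -d --exec-opt native.cgroupdriver=cgroupfs -H fd://
EOF
}

# On the Pi: write_dropin /etc/systemd/system/docker.service.d
# then: systemctl daemon-reload && systemctl restart docker
```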
Why should I set K8S_VERSION, as it's defined in the source /versions.sh step?
I lack DNS resolution in my container; need to see why...
Which version are you using?
The cgroupfs issue was fixed in 1ddec8c and 6f33dd9, which were included in the v0.5.5 release.
With the variables I just meant: do not try to run e.g.
curl -sSL -k https://github.com/coreos/flannel/archive/$FLANNEL_VERSION.tar.gz | tar -C /build -xz
Instead replace $FLANNEL_VERSION with v0.5.3, like this:
curl -sSL -k https://github.com/coreos/flannel/archive/v0.5.3.tar.gz | tar -C /build -xz
You'll find the latest working versions in images/version.sh
What do you mean by DNS resolution in the container?
My RPi was still on v0.5.5; so that explains the issue. I'll move all of them to 0.5.6 or the dev branch and start again.
Regarding DNS resolution, the curl command fails:
root@1ffd2312fc3e:/# curl -sSL -k https://github.com/coreos/etcd/archive/$ETCD_VERSION.tar.gz | tar -C /build -xz
curl: (6) Could not resolve host: github.com
Whereas from the host, it works well.
I have no idea why the network doesn't work.
Try to restart docker or something. Reboots often work :)
Still the same error from a vanilla instance; maybe some bridging is missing?
Try to start the container with --net=host.
That won't bridge the container via the docker0 interface.
I assume you have run kube-config disable.
And you may also try to run:
ifconfig docker0 down
brctl delbr docker0
systemctl restart docker
I'm making some progress with running this on Pi 1.
I'll make a pre-release (v0.5.9) soon, when everything is compiled for both platforms and with go1.5.1
I got at least etcd, flannel and pause running on a Raspberry Pi 1.
Good news! Let me know when I can test it.
As a side note, I was also wondering about the topology of my k8s cluster: should I use an RPi 1 as master and Cubietrucks as nodes, the opposite, or a mix of both? Since the Cubietruck beats the RPi in CPU/RAM/network, I wonder what the best architecture to deploy is. I'll also read the k8s docs to see if there are any recommendations on that.
Seems like go1.5.1 doesn't work at all: kubernetes/kubernetes#16857
About how to deploy: I haven't really figured out yet which config is the best one. The only thing I know is that if you have a large cluster, then you'll need a really fast master node.
In the future, one may consider whether it's possible to run this in HA mode.
Argh too bad :(
Could you have etcd, flannel and pause compiled with Go 1.5 and Kubernetes with Go 1.4? Does that make sense, and could it work?
And thanks for the link on infra sizing & HA
Yeah, it's a great idea, but it would make the build process a lot more complex :/
Everything except for registry and kubernetes compiles on go1.5.1
Have you tested v0.6.0 on an RPi 1?
If it works, we may close this.
It's part of my evening home work!
Hi,
We're almost there!
Seems I have the same on the RPi too:
[root@rpi1 ~]# kube-config enable-worker
Disabling k8s if it is running
What is the Master IP? It isn't specified or reachable at the moment. 192.168.8.110
Checks so all images are present
Downloading Kubernetes docker images from Github
Transferring images to system-docker, if necessary
Copying kubernetesonarm/flannel to system-docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
Starting the worker containers
Created symlink from /etc/systemd/system/multi-user.target.wants/k8s-worker.service to /usr/lib/systemd/system/k8s-worker.service.
Job for k8s-worker.service failed because a timeout was exceeded. See "systemctl status k8s-worker.service" and "journalctl -xe" for details.
Worker Kubernetes services enabled
And as output:
[root@rpi1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f691948505ff kubernetesonarm/hyperkube "/hyperkube kubelet -" About a minute ago Up About a minute k8s-worker
30080073d956 kubernetesonarm/hyperkube "/hyperkube proxy --m" About a minute ago Up About a minute k8s-worker-proxy
[root@rpi1 ~]# systemctl status k8s-worker -l
* k8s-worker.service - The Worker Components for Kubernetes
Loaded: loaded (/usr/lib/systemd/system/k8s-worker.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2015-12-02 21:42:04 CET; 1min 37s ago
Process: 21874 ExecStartPost=/usr/bin/docker run -d --name=k8s-worker-proxy --net=host --privileged kubernetesonarm/hyperkube /hyperkube proxy --master=http://${K8S_MASTER_IP}:8080 --v=2 (code=exited, status=0/SUCCESS)
Process: 21870 ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/static/worker (code=exited, status=0/SUCCESS)
Process: 21860 ExecStartPre=/usr/bin/docker rm k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
Process: 21852 ExecStartPre=/usr/bin/docker kill k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
Main PID: 21873 (docker)
CGroup: /system.slice/k8s-worker.service
`-21873 docker run --name=k8s-worker --net=host -v /etc/kubernetes/static/worker:/etc/kubernetes/manifests -v /:/rootfs:ro -v /sys:/sys:ro -v /dev:/dev -v /var/lib/docker/:/var/lib/docker:rw -v /var/lib/kubelet:/var/lib/kubelet:rw -v /var/run:/var/run:rw --privileged --pid=host kubernetesonarm/hyperkube /hyperkube kubelet --allow-privileged=true --containerized --pod_infra_container_image=kubernetesonarm/pause --api-servers=http://192.168.8.110:8080 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --v=2 --address=127.0.0.1 --enable-server --hostname-override=192.168.8.100 --config=/etc/kubernetes/manifests
Dec 02 21:43:13 rpi1 bash[21873]: I1202 20:43:13.908975 21992 factory.go:236] Registering Docker factory
Dec 02 21:43:14 rpi1 bash[21873]: I1202 20:43:14.271724 21992 factory.go:93] Registering Raw factory
Dec 02 21:43:18 rpi1 bash[21873]: I1202 20:43:18.031733 21992 manager.go:1006] Started watching for new ooms in manager
Dec 02 21:43:20 rpi1 bash[21873]: I1202 20:43:20.740207 21992 oomparser.go:183] oomparser using systemd
Dec 02 21:43:20 rpi1 bash[21873]: I1202 20:43:20.817548 21992 manager.go:250] Starting recovery of all containers
Dec 02 21:43:22 rpi1 bash[21873]: I1202 20:43:22.545961 21992 manager.go:255] Recovery completed
Dec 02 21:43:24 rpi1 bash[21873]: I1202 20:43:24.797495 21992 container_manager_linux.go:179] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
Dec 02 21:43:24 rpi1 bash[21873]: I1202 20:43:24.901176 21992 container_manager_linux.go:215] Configure resource-only container /docker-daemon with memory limit: 317046374
Dec 02 21:43:24 rpi1 bash[21873]: I1202 20:43:24.905473 21992 manager.go:104] Starting to sync pod status with apiserver
Dec 02 21:43:24 rpi1 bash[21873]: I1202 20:43:24.905746 21992 kubelet.go:1953] Starting kubelet main sync loop.
[root@rpi1 ~]# systemctl status docker -l
* docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
`-docker-flannel.conf
Active: active (running) since Wed 2015-12-02 21:40:27 CET; 3min 21s ago
Docs: https://docs.docker.com
Process: 16874 ExecStartPre=/usr/bin/brctl delbr docker0 (code=exited, status=0/SUCCESS)
Process: 16869 ExecStartPre=/usr/bin/ifconfig docker0 down (code=exited, status=0/SUCCESS)
Main PID: 16878 (docker)
CGroup: /system.slice/docker.service
|-16878 /usr/bin/docker -d -H unix:///var/run/docker.sock -s overlay --bip=10.1.35.1/24 --mtu=1472 --insecure-registry=registry.kube-system.svc.cluster.local:5000 --insecure-registry=10.0.0.20:5000 --insecure-registry=registry.kube-system:5000 --exec-opt native.cgroupdriver=cgroupfs
|-21891 /hyperkube proxy --master=http://192.168.8.110:8080 --v=2
|-21992 /hyperkube kubelet --allow-privileged=true --containerized --pod_infra_container_image=kubernetesonarm/pause --api-servers=http://192.168.8.110:8080 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --v=2 --address=127.0.0.1 --enable-server --hostname-override=192.168.8.100 --config=/etc/kubernetes/manifests
`-22478 journalctl -k -f
Dec 02 21:43:44 rpi1 docker[16878]: time="2015-12-02T21:43:44.918710856+01:00" level=info msg="GET /containers/json"
Dec 02 21:43:45 rpi1 docker[16878]: time="2015-12-02T21:43:45.010146926+01:00" level=info msg="GET /containers/json"
Dec 02 21:43:45 rpi1 docker[16878]: time="2015-12-02T21:43:45.271768543+01:00" level=info msg="GET /containers/json"
Dec 02 21:43:45 rpi1 docker[16878]: time="2015-12-02T21:43:45.353589921+01:00" level=info msg="GET /version"
Dec 02 21:43:45 rpi1 docker[16878]: time="2015-12-02T21:43:45.484415729+01:00" level=info msg="GET /containers/json"
Dec 02 21:43:45 rpi1 docker[16878]: time="2015-12-02T21:43:45.701925760+01:00" level=info msg="GET /containers/json"
Dec 02 21:43:46 rpi1 docker[16878]: time="2015-12-02T21:43:46.130554028+01:00" level=info msg="GET /containers/json"
Dec 02 21:43:46 rpi1 docker[16878]: time="2015-12-02T21:43:46.465711292+01:00" level=info msg="GET /containers/json"
Dec 02 21:43:46 rpi1 docker[16878]: time="2015-12-02T21:43:46.708862504+01:00" level=info msg="GET /containers/json"
Dec 02 21:43:47 rpi1 docker[16878]: time="2015-12-02T21:43:47.038400949+01:00" level=info msg="GET /containers/json"
[root@rpi1 ~]# systemctl status flannel -l
* flannel.service - Flannel Overlay Network for Kubernetes
Loaded: loaded (/usr/lib/systemd/system/flannel.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2015-12-02 21:39:31 CET; 4min 27s ago
Main PID: 16773 (docker)
CGroup: /system.slice/flannel.service
`-16773 /usr/bin/docker -H unix:///var/run/system-docker.sock run --name=k8s-flannel --net=host --privileged -v /dev/net:/dev/net -v /var/lib/kubernetes/flannel:/run/flannel kubernetesonarm/flannel /flanneld --etcd-endpoints=http://192.168.8.110:4001
Dec 02 21:39:31 rpi1 docker[16753]: Error: failed to remove containers: [k8s-flannel]
Dec 02 21:39:31 rpi1 systemd[1]: Started Flannel Overlay Network for Kubernetes.
Dec 02 21:40:24 rpi1 docker[16773]: I1202 20:40:11.898254 00001 main.go:275] Installing signal handlers
Dec 02 21:40:24 rpi1 docker[16773]: I1202 20:40:11.902535 00001 main.go:130] Determining IP address of default interface
Dec 02 21:40:24 rpi1 docker[16773]: I1202 20:40:11.912136 00001 main.go:188] Using 192.168.8.100 as external interface
Dec 02 21:40:24 rpi1 docker[16773]: I1202 20:40:11.917213 00001 main.go:189] Using 192.168.8.100 as external endpoint
Dec 02 21:40:24 rpi1 docker[16773]: I1202 20:40:11.946386 00001 etcd.go:204] Picking subnet in range 10.1.1.0 ... 10.1.255.0
Dec 02 21:40:24 rpi1 docker[16773]: I1202 20:40:11.970682 00001 etcd.go:84] Subnet lease acquired: 10.1.35.0/24
Dec 02 21:40:24 rpi1 docker[16773]: I1202 20:40:23.817700 00001 udp.go:222] Watching for new subnet leases
Dec 02 21:40:24 rpi1 docker[16773]: I1202 20:40:23.882824 00001 udp.go:247] Subnet added: 10.1.28.0/24
[root@rpi1 ~]# systemctl status etcd -l
* etcd.service - Etcd Master Data Store for Kubernetes Apiserver
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Dec 02 21:15:52 rpi1 systemd[1]: Stopped Etcd Master Data Store for Kubernetes Apiserver.
Could you edit the comment so the ``` gets in the right place?
What do you mean is wrong? This?
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
Exactly,
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
Starting the worker containers
Created symlink from /etc/systemd/system/multi-user.target.wants/k8s-worker.service to /usr/lib/systemd/system/k8s-worker.service.
Job for k8s-worker.service failed because a timeout was exceeded. See "systemctl status k8s-worker.service" and "journalctl -xe" for details.
Worker Kubernetes services enabled
Disabling & re-enabling the worker does not change the result:
[root@rpi1 ~]# kube-config disable-node
Removed symlink /etc/systemd/system/multi-user.target.wants/k8s-worker.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/flannel.service.
[root@rpi1 ~]# kube-config enable-worker
Disabling k8s if it is running
Checks so all images are present
Transferring images to system-docker, if necessary
Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
Starting the worker containers
Created symlink from /etc/systemd/system/multi-user.target.wants/k8s-worker.service to /usr/lib/systemd/system/k8s-worker.service.
Worker Kubernetes services enabled
So is it just about restarting the docker/k8s-worker services, or is there something wrong?
systemctl status -l reports no errors.
# Enable and start our bootstrap services
systemctl enable flannel
systemctl start flannel
# Wait for flannel
sleep 5 # <---- maybe too small number on a "slow" Pi 1
# Create a symlink to the dropin location, so docker will use flannel
dropins-enable-flannel
Could you see if you find anything interesting in journalctl -xeu docker?
It looks like flannel didn't have enough time to init, so docker failed.
However, it isn't serious because they restart automatically, but anyway.
For docker, I have:
-- Logs begin at Thu 1970-01-01 01:00:07 CET, end at Wed 2015-12-02 22:11:04 CET. --
Dec 02 21:59:54 rpi1 systemd[1]: docker.service: Unit entered failed state.
Dec 02 21:59:54 rpi1 systemd[1]: docker.service: Failed with result 'resources'.
Dec 02 21:59:56 rpi1 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Dec 02 21:59:56 rpi1 systemd[1]: Stopped Docker Application Container Engine.
-- Subject: Unit docker.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has finished shutting down.
Dec 02 21:59:56 rpi1 systemd[1]: docker.service: Failed to load environment files: No such file or directory
Dec 02 21:59:56 rpi1 systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Dec 02 21:59:56 rpi1 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Yes, I'll increase the timeout to 8 secs for now.
However, if I have time, I'll maybe create a loop that checks for the flannel subnet file and exits once it appears.
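The loop idea could be sketched like this; the default subnet-file path is inferred from the -v /var/lib/kubernetes/flannel:/run/flannel mount shown earlier in the thread, so treat it as an assumption and adjust to the real location:

```shell
#!/bin/sh
# Poll for flannel's subnet file instead of a fixed sleep. Default path is an
# assumption based on the flannel volume mount seen in this thread's logs.
SUBNET_FILE="${SUBNET_FILE:-/var/lib/kubernetes/flannel/subnet.env}"

wait_for_flannel() {
    timeout="${1:-60}"              # seconds to wait before giving up
    while [ "$timeout" -gt 0 ]; do
        [ -f "$SUBNET_FILE" ] && return 0
        sleep 1
        timeout=$((timeout - 1))
    done
    echo "timed out waiting for $SUBNET_FILE" >&2
    return 1
}

# In the bootstrap script: wait_for_flannel 60 && dropins-enable-flannel
```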
I'll close this now.
Complain loudly if you disagree 😄
Ok, let's go this way :)
Quickly tested with sleep 20 and it seems to work like a charm :)
OK, thanks for following this up :)