luxas / kubernetes-on-arm

Kubernetes ported to ARM boards like Raspberry Pi.

License: MIT License

Shell 90.63% HTML 0.19% Nginx 1.70% Makefile 7.48%
kubernetes arm raspberry-pi arm-devices kubeadm

kubernetes-on-arm's Introduction

Welcome to the Kubernetes on ARM project!

Kubernetes on a Raspberry Pi? Is that possible?

Yes, now it is (and has been since v1.0.1 with this project)

Imagine... Your own testbed for Kubernetes with cheap Raspberry Pis and friends.

Image of Kubernetes and Raspberry Pi

Are you convinced too, like me, that cheap ARM boards and Kubernetes are a match made in heaven?

Then let's go!

Important information

This project was published in September 2015 as the first fully working way to easily set up Kubernetes on ARM devices.

You can read my story here.

I worked on making it better non-stop until early 2016, when I started contributing the changes I'd made back to Kubernetes core. I strongly believe most of these features belong in the core, so everyone can take advantage of them and Kubernetes can be ported to even more platforms.

So I opened kubernetes/kubernetes#17981 and started working on making Kubernetes cross-platform. To date I've ported the Kubernetes core to ARM, ARM 64-bit and PowerPC 64-bit little-endian. Official ARM binaries were released as early as v1.2.0, and I used those official binaries starting with v0.7.0 of Kubernetes on ARM.

Since v1.3.0 the hyperkube image has been built for both arm and arm64, which has made it possible to run Kubernetes officially the "kick the tires" way. So it has been possible to run v1.3.x Kubernetes on Raspberry Pis (or whatever arm or arm64 device runs docker) with the docker-multinode deployment. However, docker-multinode has been deprecated and removed, and shouldn't be used anymore.

I've written a proposal about how to make Kubernetes available for multiple platforms here.

Then I also ported kubeadm to arm and arm64, and kubeadm is so much better than the docker-multinode deployment method I used earlier (before the features that kubeadm takes advantage of existed).

So now the officially recommended and supported way of running Kubernetes on ARM is to follow the kubeadm getting started guide. Since I've moved all the features this project had into the core, there's no longer much need for this project.

Get your ARM device up and running Kubernetes in less than ten minutes

I now have a workshop on how to create a Kubernetes cluster on ARM: https://github.com/luxas/kubeadm-workshop. Please look there, or at the kubeadm getting started guide, for information on how to create a Kubernetes cluster on ARM.
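
In short, the kubeadm flow is an init on the master and a join on each worker. A minimal sketch (exact flags depend on the kubeadm version; the token and IP below are placeholders):

# On the master board:
kubeadm init

# On each worker, using the token printed by `kubeadm init` (placeholder values):
kubeadm join --token <token> <master-ip>:6443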

Various related resources

kubernetes-on-arm's People

Contributors

doriangray, kimlehtinen, kyletravis, larmog, luxas, madmas, mathiasrenner, oriyosefimsft, smcquaid, wombat


kubernetes-on-arm's Issues

[0.6.0 - Cubietruck] kube-config enable master fails

It failed on first init like we had previously when flannel/etcd failed to init properly

I could trap at least:

[root@kube1 ~]# systemctl status -l etcd
* etcd.service - Etcd Master Data Store for Kubernetes Apiserver
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2015-12-01 21:09:38 CET; 4min 58s ago
 Main PID: 734 (code=exited, status=0/SUCCESS)

Dec 01 21:09:03 kube1 docker[734]: 2015-12-01 20:09:03.824524 N | etcdserver: set the initial cluster version to 2.2
Dec 01 21:09:03 kube1 docker[734]: 2015-12-01 20:09:03.825445 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:4001]} to cluster 7e27652122e8b2ae
Dec 01 21:09:30 kube1 bash[735]: Error:  client: etcd cluster is unavailable or misconfigured
Dec 01 21:09:31 kube1 bash[735]: error #0: client: endpoint http://127.0.0.1:4001 exceeded header timeout
Dec 01 21:09:35 kube1 systemd[1]: etcd.service: Control process exited, code=exited status=4
Dec 01 21:09:35 kube1 docker[734]: 2015-12-01 20:09:35.929778 N | osutil: received terminated signal, shutting down...
Dec 01 21:09:38 kube1 docker[780]: k8s-etcd
Dec 01 21:09:38 kube1 systemd[1]: Failed to start Etcd Master Data Store for Kubernetes Apiserver.
Dec 01 21:09:38 kube1 systemd[1]: etcd.service: Unit entered failed state.
Dec 01 21:09:38 kube1 systemd[1]: etcd.service: Failed with result 'exit-code'.

Then flannel fails since etcd is not there, and so does everything else.

I disabled the node via kube-config disable-node, checked that all docker images are there (they are, in both latest and 0.6.0 versions) and re-ran kube-config enable-master, which was successful this time.

Will double check on next cubie.

Scaleway

I feel compelled to get this working on scaleway. This of course blocks me from using a "flash your sdcard" deploy. Any thoughts on where I might get started?

Thanks!

kubectl: command not found

I successfully started the master node.

[root@pi1 ~]# docker ps
CONTAINER ID        IMAGE                       COMMAND                CREATED              STATUS              PORTS               NAMES
fdc7e6e5722a        kubernetesonarm/hyperkube   "/hyperkube schedule   21 seconds ago       Up 20 seconds                           k8s_scheduler.e5efe276_k8s-master-192.168.10.101_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_a5a8ae29
93ebc0b6ceac        kubernetesonarm/hyperkube   "/hyperkube apiserve   22 seconds ago       Up 20 seconds                           k8s_apiserver.c358020f_k8s-master-192.168.10.101_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_df2822eb
e57bf6126cb0        kubernetesonarm/hyperkube   "/hyperkube controll   22 seconds ago       Up 21 seconds                           k8s_controller-manager.7047e990_k8s-master-192.168.10.101_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_e12a78d6
a6ba8833893a        kubernetesonarm/pause       "/pause"               32 seconds ago       Up 31 seconds                           k8s_POD.7ad6c339_k8s-master-192.168.10.101_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_4b992c87
d0eb1acfa2cd        kubernetesonarm/hyperkube   "/hyperkube kubelet    About a minute ago   Up About a minute                       k8s-master
6bb0ad12d67d        kubernetesonarm/hyperkube   "/hyperkube proxy --   About a minute ago   Up About a minute                       k8s-worker-proxy

But when I try to use Kubernetes, I get this error...

[root@pi1 ~]# kubectl run my-nginx --image=luxas/nginx-test --replicas=3
-bash: kubectl: command not found

Am I missing something?

Script downloads new image for every install.

sdcard/write.sh downloads a new image every time you run it. If you are installing the image on a large number of SD cards in one go, /etc/tmp fills up quite rapidly with redundant copies of the image. It also adds a large amount of time to the installation process. Having it only download an image if one does not already exist would be a nice change.
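
A minimal sketch of the suggested change (the variable names and the download path under /etc/tmp are illustrative, not the script's actual code):

# Reuse a previously downloaded image instead of fetching it again
IMAGE_FILE="/etc/tmp/$(basename "$IMAGE_URL")"
if [[ ! -f "$IMAGE_FILE" ]]; then
    curl -L -o "$IMAGE_FILE" "$IMAGE_URL"
fi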

kube-config upgrade issue

Hello,
Great project, been following for a while!
Unsure if this is something on my end; I'm new to this and learning. However, I've just run a kube-config upgrade command, which resulted in these failed updates:

kube-config upgrade
Upgrading the system
:: Synchronizing package databases...
error: failed retrieving file 'core.db' from mirror.archlinuxarm.org : Resolving timed out after 10522 milliseconds
error: failed to update core (download library error)

Everything else ran perfectly and then I ran 'docker ps', which results in the error:
Error response from daemon: client is newer than server (client API version: 1.21, server API version: 1.19)

Would this be due to the error upgrading the system? Apologies, pretty new to all of this :)

Thank you.

verify image pull policy

I noticed that the image pull policy is not set in the static pod yamls. The images pulled also don't have specific tags. IIRC, this means that Kubernetes may be applying the default pull always policy.

Would it be possible to do either of the following? (A sketch is shown below the list.)

  • specify the image pull policy
  • tag the docker images and specify the tags in the pod yamls?
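
For illustration, a container entry in a static pod manifest could pin both a tag and an explicit pull policy (a sketch; the tag shown is just an example taken from the image listings elsewhere on this page):

containers:
  - name: hyperkube
    image: kubernetesonarm/hyperkube:0.5.5
    imagePullPolicy: IfNotPresent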

support for Odroid C1

Hello, is the Odroid C1 supported? I thought it could be fairly easy, since it looks like the HypriotOS guys have added it to their supported platforms.
Meanwhile, thumbs up luxas, and when are we seeing the release with the new dashboard and Heapster?

dev branch error: updatefile: command not found

Happy New Year's!

I ran into an issue today when deploying the dev branch from scratch.

[root@kube-master ~]# kube-config enable-addon dns
namespace "kube-system" created
/etc/kubernetes/dynamic-env/os/archlinux.sh: line 59: updatefile: command not found

A quick edit to these two lines in /etc/kubernetes/dynamic-env/os/archlinux.sh, changing updatefile to updateline, did the trick:

[root@kube-master conf]# grep update /etc/kubernetes/dynamic-env/os/archlinux.sh
    updateline /etc/systemd/network/dns.network "Domains" "Domains=default.svc.$DNS_DOMAIN svc.$DNS_DOMAIN $DNS_DOMAIN"
    updateline /etc/systemd/network/dns.network "DNS" "DNS=$DNS_IP;"

After the mod, all is well:

[root@kube-master ~]# kube-config enable-addon dns
replicationcontroller "kube-dns-v8" created
service "kube-dns" created
Started addon: dns

[root@kube-master ~]# curl my-nginx.default.svc.cluster.local
<p>WELCOME TO NGINX</p>
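
For reference, the same edit can be applied as a one-liner (a sketch; it just performs the substitution described above):

sed -i 's/updatefile/updateline/g' /etc/kubernetes/dynamic-env/os/archlinux.sh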

[0.6.2-Cubietruck] kube-config enable-worker [master-ip] does not work

Hi,

On the master side everything is fine so far, but on the worker side I can't override 127.0.0.1; it seems the argument is not taken into account.

[root@kube2 ~]# kube-config enable-worker 192.168.8.110
Disabling k8s if it is running
Removed symlink /etc/systemd/system/multi-user.target.wants/flannel.service.
Using master ip: 127.0.0.1
curl: (28) Resolving timed out after 5516 milliseconds
Kubernetes Master IP is required.
Value right now: 127.0.0.1
Exiting...

Command:
kube-config enable-worker [master-ip]
Checks so all images are present
Transferring images to system-docker, if necessary
Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.

If I manually edit the /etc/kubernetes/k8s.conf file, then kube-config enable-worker works as expected.

What I also find weird is that if it did not find the master node, it should just exit and not try to launch the process, which of course fails. There should be an exit statement at line 393 of the kube-config.sh file.
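
For anyone hitting this, the manual workaround is just putting the real master IP into the config file that the systemd units source (a sketch; the address is an example, and K8S_MASTER_IP is the variable the k8s-worker unit reads from this file):

# /etc/kubernetes/k8s.conf
K8S_MASTER_IP=192.168.8.110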

[0.6.0 - RPI1] Worker does not survive a reboot

So the worker was correctly initialised; then I rebooted to check whether it restarts correctly, and it failed.

[root@rpi1 ~]# docker ps
An error occurred trying to connect: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.21/containers/json: read unix @->/var/run/docker.sock: read: connection reset by peer
[root@rpi1 ~]# systemctl status docker -l
* docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           `-docker-flannel.conf
   Active: activating (auto-restart) (Result: resources) since Wed 2015-12-02 22:44:17 CET; 374ms ago
     Docs: https://docs.docker.com

Dec 02 22:44:17 rpi1 systemd[1]: docker.service: Unit entered failed state.
Dec 02 22:44:17 rpi1 systemd[1]: docker.service: Failed with result 'resources'.
Warning: docker.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[root@rpi1 ~]# systemctl status flannel -l
* flannel.service - Flannel Overlay Network for Kubernetes
   Loaded: loaded (/usr/lib/systemd/system/flannel.service; enabled; vendor preset: disabled)
   Active: failed (Result: timeout) since Wed 2015-12-02 22:39:40 CET; 5min ago
  Process: 229 ExecStartPre=/usr/bin/mkdir -p /var/lib/kubernetes/flannel (code=exited, status=0/SUCCESS)
  Process: 226 ExecStartPre=/usr/bin/rm -rf /var/lib/kubernetes/flannel (code=exited, status=0/SUCCESS)
  Process: 219 ExecStartPre=/usr/bin/docker -H unix:///var/run/system-docker.sock rm k8s-flannel (code=exited, status=0/SUCCESS)
  Process: 160 ExecStartPre=/usr/bin/docker -H unix:///var/run/system-docker.sock kill k8s-flannel (code=exited, status=2)

Dec 02 22:37:42 rpi1 systemd[1]: Starting Flannel Overlay Network for Kubernetes...
Dec 02 22:39:37 rpi1 systemd[1]: flannel.service: Start-pre operation timed out. Terminating.
Dec 02 22:39:40 rpi1 docker[219]: k8s-flannel
Dec 02 22:39:40 rpi1 systemd[1]: Failed to start Flannel Overlay Network for Kubernetes.
Dec 02 22:39:40 rpi1 systemd[1]: flannel.service: Unit entered failed state.
Dec 02 22:39:40 rpi1 systemd[1]: flannel.service: Failed with result 'timeout'.
[root@rpi1 ~]# systemctl status k8s-worker -l
* k8s-worker.service - The Worker Components for Kubernetes
   Loaded: loaded (/usr/lib/systemd/system/k8s-worker.service; enabled; vendor preset: disabled)
   Active: activating (start-post) since Wed 2015-12-02 22:45:45 CET; 9s ago
  Process: 1187 ExecStop=/usr/bin/docker stop k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
  Process: 1228 ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/static/worker (code=exited, status=0/SUCCESS)
  Process: 1217 ExecStartPre=/usr/bin/docker rm k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
  Process: 1207 ExecStartPre=/usr/bin/docker kill k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
 Main PID: 1232 (bash);         : 1233 (docker)
   CGroup: /system.slice/k8s-worker.service
           |-1232 /bin/bash -c source /etc/kubernetes/k8s.conf; exec docker run                --name=k8s-worker                        --net=host                          -v /etc/kubernetes/static/worker:/etc/kubernetes/manifests              -v /:/rootfs:ro                        -v /sys:/sys:ro                        -v /dev:/dev                         -v /var/lib/docker/:/var/lib/docker:rw                   -v /var/lib/kubelet:/var/lib/kubelet:rw                  -v /var/run:/var/run:rw                      --privileged                         --pid=host                          kubernetesonarm/hyperkube /hyperkube kubelet                  --allow-privileged=true                      --containerized                        --pod_infra_container_image=kubernetesonarm/pause               --api-servers=http://192.168.8.110:8080                  --cluster-dns=10.0.0.10                      --cluster-domain=cluster.local                     --v=2                           --address=127.0.0.1                       --enable-server                        --hostname-override=$(/usr/bin/hostname -i | /usr/bin/awk '{print $1}')          --config=/etc/kubernetes/manifests
           |-1234 /bin/bash -c source /etc/kubernetes/k8s.conf; exec docker run                --name=k8s-worker                        --net=host                          -v /etc/kubernetes/static/worker:/etc/kubernetes/manifests              -v /:/rootfs:ro                        -v /sys:/sys:ro                        -v /dev:/dev                         -v /var/lib/docker/:/var/lib/docker:rw                   -v /var/lib/kubelet:/var/lib/kubelet:rw                  -v /var/run:/var/run:rw                      --privileged                         --pid=host                          kubernetesonarm/hyperkube /hyperkube kubelet                  --allow-privileged=true                      --containerized                        --pod_infra_container_image=kubernetesonarm/pause               --api-servers=http://192.168.8.110:8080                  --cluster-dns=10.0.0.10                      --cluster-domain=cluster.local                     --v=2                           --address=127.0.0.1                       --enable-server                        --hostname-override=$(/usr/bin/hostname -i | /usr/bin/awk '{print $1}')          --config=/etc/kubernetes/manifests
           |-1235 /usr/bin/hostname -i
           |-1236 /usr/bin/awk {print $1}
           `-control
             `-1233 /usr/bin/docker run -d --name=k8s-worker-proxy --net=host --privileged kubernetesonarm/hyperkube /hyperkube proxy --master=http://192.168.8.110:8080 --v=2

Dec 02 22:45:45 rpi1 systemd[1]: Starting The Worker Components for Kubernetes...
Dec 02 22:45:47 rpi1 docker[1207]: An error occurred trying to connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.21/containers/k8s-worker/kill?signal=KILL: read unix @->/var/run/docker.sock: read: connection reset by peer
Dec 02 22:45:49 rpi1 docker[1207]: An error occurred trying to connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.21/containers/k8s-worker-proxy/kill?signal=KILL: read unix @->/var/run/docker.sock: read: connection reset by peer
Dec 02 22:45:49 rpi1 docker[1207]: Error: failed to kill containers: [k8s-worker k8s-worker-proxy]
Dec 02 22:45:52 rpi1 docker[1217]: An error occurred trying to connect: read unix @->/var/run/docker.sock: read: connection reset by peer
Dec 02 22:45:54 rpi1 docker[1217]: An error occurred trying to connect: read unix @->/var/run/docker.sock: read: connection reset by peer
Dec 02 22:45:54 rpi1 docker[1217]: Error: failed to remove containers: [k8s-worker k8s-worker-proxy]

Seems I need to run:

systemctl daemon-reload
systemctl start flannel

From this point, it seems docker and k8s-worker restart and work well.

Test Results - V 0.5.8 - Cubietruck

Not just opening issues, but also tracking what works well :)

Will update this over the weekend.

On Cubietruck:

  • Write to sdcard: OK
  • kube-config install process: OK
  • kube-config enable-master: OK

I have as output:

[root@thanos ~]# docker ps
CONTAINER ID        IMAGE                       COMMAND                CREATED             STATUS              PORTS               NAMES
f7f6d35699b1        kubernetesonarm/hyperkube   "/hyperkube schedule   2 minutes ago       Up 2 minutes                            k8s_scheduler.e5efe276_k8s-master-192.168.8.100_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_da0c9c8a            
fff8c1807c96        kubernetesonarm/hyperkube   "/hyperkube apiserve   2 minutes ago       Up 2 minutes                            k8s_apiserver.c358020f_k8s-master-192.168.8.100_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_dca5fe9f            
a62c47d12e76        kubernetesonarm/hyperkube   "/hyperkube controll   2 minutes ago       Up 2 minutes                            k8s_controller-manager.7047e990_k8s-master-192.168.8.100_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_441ccae3   
4a4ad3f6c696        kubernetesonarm/pause       "/pause"               2 minutes ago       Up 2 minutes                            k8s_POD.7ad6c339_k8s-master-192.168.8.100_kube-system_447c171dfac8ae64dc585f8e9cbfa7e6_966ab67b                  
4a78dccf9faf        kubernetesonarm/hyperkube   "/hyperkube kubelet    2 minutes ago       Up 2 minutes                            k8s-master                                                                                                       
d244bc1afb7e        kubernetesonarm/hyperkube   "/hyperkube proxy --   2 minutes ago       Up 2 minutes                            k8s-worker-proxy

Question: is it normal that I have each image twice, one tagged latest and one tagged 0.5.5?

[root@thanos ~]# docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
kubernetesonarm/skydns        0.5.5               0a73ba8222b8        2 weeks ago         9.953 MB
kubernetesonarm/skydns        latest              0a73ba8222b8        2 weeks ago         9.953 MB
kubernetesonarm/flannel       latest              323b7a93d03b        2 weeks ago         90.25 MB
kubernetesonarm/flannel       0.5.5               323b7a93d03b        2 weeks ago         90.25 MB
kubernetesonarm/registry      0.5.5               9b888b2f2d5a        2 weeks ago         109 MB
kubernetesonarm/registry      latest              9b888b2f2d5a        2 weeks ago         109 MB
kubernetesonarm/exechealthz   0.5.5               8d67dee89c29        2 weeks ago         11.73 MB
kubernetesonarm/exechealthz   latest              8d67dee89c29        2 weeks ago         11.73 MB
kubernetesonarm/kube2sky      0.5.5               d7e9a144c99d        2 weeks ago         13.1 MB
kubernetesonarm/kube2sky      latest              d7e9a144c99d        2 weeks ago         13.1 MB
kubernetesonarm/pause         latest              22e22d3123c0        2 weeks ago         227 kB
kubernetesonarm/pause         0.5.5               22e22d3123c0        2 weeks ago         227 kB
kubernetesonarm/hyperkube     0.5.5               fd5c7305c597        2 weeks ago         125.5 MB
kubernetesonarm/hyperkube     latest              fd5c7305c597        2 weeks ago         125.5 MB
kubernetesonarm/etcd          latest              cc678c339590        2 weeks ago         18.2 MB
kubernetesonarm/etcd          0.5.5               cc678c339590        2 weeks ago         18.2 MB

I'll need to move the current content of my Cubietruck to an RPi1 so that I can use the former as a worker node.

So far so good :)

enable-master breaks docker

Hey, I'm trying to get just the master node up and running. I did kube-config install, then kube-config enable-master, and rebooted. Before enabling the master, docker works fine; afterwards it's broken. When I check the status I get the following; have you run into this?

[root@k8s-master ~]# systemctl start docker.service
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@k8s-master ~]# systemctl status docker.service
* docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           `-docker-flannel.conf
   Active: failed (Result: start-limit)
     Docs: https://docs.docker.com

Nov 20 15:36:27 k8s-master systemd[1]: Failed to start Docker Application Container Engine.
Nov 20 15:36:27 k8s-master systemd[1]: docker.service: Failed with result 'start-limit'.
Nov 20 15:37:52 k8s-master systemd[1]: docker.service: Failed to load environment files: No such file or directory
Nov 20 15:37:52 k8s-master systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Nov 20 15:37:52 k8s-master systemd[1]: Failed to start Docker Application Container Engine.
Nov 20 15:37:52 k8s-master systemd[1]: docker.service: Failed with result 'resources'.
Nov 20 15:37:55 k8s-master systemd[1]: docker.service: Failed to load environment files: No such file or directory
Nov 20 15:37:55 k8s-master systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Nov 20 15:37:55 k8s-master systemd[1]: Failed to start Docker Application Container Engine.
Nov 20 15:37:55 k8s-master systemd[1]: docker.service: Failed with result 'resources'.

Supporting rancher server and agents

Hey, just wondering if you could support the Rancher server and agent on ARM.
Rancher has the ability to deploy and manage many other platforms like Kubernetes, Hadoop etc. very smoothly through their community-supported catalog.
I know RancherOS support is on your roadmap, but if the server and agents could be ported to Arch or HypriotOS, it could save a whole lot of work.

install fails on hypriot

$ dpkg -i kube-systemd.deb
(Reading database ... 53240 files and directories currently installed.)
Preparing to unpack kube-systemd.deb ...
Unpacking kubernetes-on-arm (0.6.2-2) ...
dpkg: error processing archive kube-systemd.deb (--install):
trying to overwrite '/README.md', which is also in package occi 0.6.0
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
Errors were encountered while processing:
kube-systemd.deb

HypriotOS: root@hostname in ~
$ uname -na
Linux hostname 4.1.12-hypriotos-v7+ #2 SMP PREEMPT Tue Nov 3 19:44:55 UTC 2015 armv7l GNU/Linux

HypriotOS: root@hostname in ~
$ cat /etc/*release
profile: hypriot
image_build: 20151115-132854
image_commit: ac0496d13f8034bbb13fb60af4d3988773aec860
kernel_build: 20151103-193133
kernel_commit: 4b796cf43834bf552a86cc267a97069edc8dfe08

PRETTY_NAME="Raspbian GNU/Linux 8 (jessie)"
NAME="Raspbian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

Install fails on kube-config enable-worker [master-ip] - using Arch + RPI2

Hello, first off thanks for putting this together - it has provided a great learning experience over the holiday break after receiving a couple of RPI2's to tinker with! And a cheaper route than Google Cloud 👍

I encountered an issue when enabling the workers following a clean install of both the master and worker. It appears to be using 127.0.0.1 instead of the value passed as the CLI argument. This was run after the master came up successfully.

[root@kube-slave01 ~]# kube-config enable-worker 192.168.0.20
Disabling k8s if it is running
Using master ip: 127.0.0.1
curl: (6) Could not resolve host: 
Kubernetes Master IP is required.
Value right now: 127.0.0.1
Exiting...

Command:
kube-config enable-worker [master-ip]
Checks so all images are present
Downloading Kubernetes docker images from Github
Transferring images to system-docker, if necessary
Copying kubernetesonarm/flannel to system-docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.

I was able to tweak the /usr/bin/kube-config file and achieve the expected behavior below. Stay tuned for a quick pull request.

[root@kube-slave02 ~]# kube-config enable-worker 192.168.0.20
Disabling k8s if it is running
Using master ip: 192.168.0.20
Checks so all images are present
Downloading Kubernetes docker images from Github
Transferring images to system-docker, if necessary
Copying kubernetesonarm/flannel to system-docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
Starting worker components in docker containers
Created symlink from /etc/systemd/system/multi-user.target.wants/k8s-worker.service to /usr/lib/systemd/system/k8s-worker.service.
Kubernetes worker services enabled

CPU Usage on RPI1/Workers and Cubietruck/Master

Hi,

With no docker images running, I'm a little surprised by the load average and the CPU usage:

  • On Cubietruck / master node: both CPUs are between 80% and 100%, and the load average (htop) is between 6 and 8.
  • On Cubietruck / worker node: both CPUs are around 10%, and the load average is normal (below 1).
  • On RPI1 / worker nodes: CPU is between 66% and 100%, and the load average is > 1 (< 2).

If the main processes alone take this many resources, what can I expect in terms of hosting containers?

Do you see the same CPU / load average consumption?

need to install bridge-utils?

/usr/lib/systemd/system/docker.service.d/docker-flannel.conf uses brctl, which is available in the package bridge-utils. This isn't installed by default when I ran the script. Is this a known issue?

thanx
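
If it does turn out to be needed, installing it on the Arch-based image is a one-liner (a sketch; on HypriotOS/Debian it would be apt-get install bridge-utils instead):

pacman -S --noconfirm bridge-utils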

kube-config enable worker issue

So I set up a 2nd RPI1, did not create the swap file, and then went directly to "kube-config enable-worker":

kube-config enable-worker
Can't spin up Kubernetes without these images: kubernetesonarm/flannel kubernetesonarm/hyperkube kubernetesonarm/pause
Tries to pull them instead.
docker: "pull" requires 1 argument.
See 'docker pull --help'.

Usage: docker pull [OPTIONS] NAME[:TAG|@DIGEST]

Pull an image or a repository from the registry
Pull failed

So should I pull first:

  • kubernetesonarm/etcd:
  • kubernetesonarm/flannel:
  • kubernetesonarm/hyperkube:
  • kubernetesonarm/pause?

If yes, then it should not be under "build the docker images for arm".

I have neither pulled your own images nor run kube-config build-images / kube-config build-addons since, from my understanding, it's no longer required.

/usr/bin/kube-config is not executable

With 0.5.5 on Rasp V1, /usr/bin/kube-config is not executable

So it requires

chmod +x /usr/bin/kube-config

before being able to run kube-config install.

origin/location of kube-config

I just switched from k8s-on-rpi to this project but I am curious where the kube-config command is coming from to install master and worker nodes?

k8s-worker does not initialise correctly

On my Cubietruck, kube-config install runs well (I did not create the 1GB swap file), but when I run kube-config enable-worker:

  • it asked for the ip of the k8s master, which I provided
  • It pulled kubernetesonarm/* images
  • but then it said that job docker.service failed because a configured resource limit was exceeded
  • and k8s-worker.service failed because the control process exited with error code
[root@thanos ~]# systemctl status docker -l
* docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           `-docker-flannel.conf
   Active: failed (Result: start-limit) since Sun 2015-10-25 00:32:36 CEST; 11h ago
     Docs: https://docs.docker.com
 Main PID: 391 (code=exited, status=0/SUCCESS)

Oct 25 00:32:48 thanos systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Oct 25 00:32:48 thanos systemd[1]: Failed to start Docker Application Container Engine.
Oct 25 00:32:48 thanos systemd[1]: docker.service: Failed with result 'resources'.
Oct 25 00:32:48 thanos systemd[1]: docker.service: Failed to load environment files: No such file or directory
Oct 25 00:32:48 thanos systemd[1]: docker.service: Failed to run 'start-pre' task: No such file or directory
Oct 25 00:32:48 thanos systemd[1]: Failed to start Docker Application Container Engine.
Oct 25 00:32:48 thanos systemd[1]: docker.service: Failed with result 'resources'.
Oct 25 00:32:48 thanos systemd[1]: docker.service: Start request repeated too quickly.
Oct 25 00:32:48 thanos systemd[1]: Failed to start Docker Application Container Engine.
Oct 25 00:32:48 thanos systemd[1]: docker.service: Failed with result 'start-limit'.

and :

[root@thanos ~]# systemctl status k8s-worker -l
* k8s-worker.service - The Master Components for Kubernetes
   Loaded: loaded (/usr/lib/systemd/system/k8s-worker.service; enabled; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Sun 2015-10-25 11:05:55 CET; 4s ago
  Process: 19120 ExecStop=/usr/bin/docker stop k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
  Process: 19109 ExecStartPost=/usr/bin/docker run -d --name=k8s-worker-proxy --net=host --privileged kubernetesonarm/hyperkube /hyperkube proxy --master=http://${K8S_MASTER_IP}:8080 --v=2 (code=exited, status=1/FAILURE)
  Process: 19108 ExecStart=/bin/bash -c source /etc/kubernetes/k8s.conf; exec docker run --name=k8s-worker --net=host -v /var/run/docker.sock:/var/run/docker.sock kubernetesonarm/hyperkube /hyperkube kubelet --pod_infra_container_image=kubernetesonarm/pause --api-servers=http://${K8S_MASTER_IP}:8080 --v=2 --address=127.0.0.1 --enable-server --hostname-override=$(/usr/bin/hostname -i | /usr/bin/awk '{print $1}') --cluster-dns=10.0.0.10 --cluster-domain=cluster.local (code=exited, status=1/FAILURE)
  Process: 19101 ExecStartPre=/usr/bin/docker rm k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
  Process: 19096 ExecStartPre=/usr/bin/docker kill k8s-worker k8s-worker-proxy (code=exited, status=1/FAILURE)
 Main PID: 19108 (code=exited, status=1/FAILURE)

Oct 25 11:05:55 thanos systemd[1]: Failed to start The Master Components for Kubernetes.
Oct 25 11:05:55 thanos systemd[1]: k8s-worker.service: Unit entered failed state.
Oct 25 11:05:55 thanos systemd[1]: k8s-worker.service: Failed with result 'exit-code'.

When I look quickly at the output:

  • DNS can't be 10.0.0.10 in my context
  • Port 8080 on my k8s master is not opened

Btw, when I ran kube-config enable-master on my RPI1, there was no special output, so I don't know whether it's working well or not. How can I check this?

Thanks!
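
A few generic ways to verify the master actually came up (a sketch reusing commands already shown in these issues; 8080 is the insecure apiserver port these units use):

systemctl status k8s-master -l       # did the unit start cleanly?
docker ps                            # are the apiserver/scheduler/controller containers up?
curl http://127.0.0.1:8080/version   # does the apiserver answer?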

Write.sh output - default choice & tar mentions "ignoring unknown extended header keyword"

Hi,

Two comments on the write.sh output:

  • If Y is upper case, then it should be the default choice. If you don't type y or Y, the program exits instead of continuing by default.
sudo sdcard/write.sh /dev/sdc rpi archlinux kube-archlinux
You are going to lose all your data on /dev/sdc. Continue? [Y/n]
Quitting...
sudo sdcard/write.sh /dev/sdc rpi archlinux kube-archlinux
You are going to lose all your data on /dev/sdc. Continue? [Y/n]y
OK. Continuing...
  • For the 2nd point, are the tar messages about ignoring an unknown extended header an issue or not?
sudo sdcard/write.sh /dev/sdc rpi archlinux kube-archlinux
You are going to lose all your data on /dev/sdc. Continue? [Y/n]y
OK. Continuing...
Now /dev/sdc is going to be partitioned

Bienvenue dans fdisk (util-linux 2.27).
Les modifications resteront en mémoire jusqu'à écriture.
Soyez prudent avant d'utiliser la commande d'écriture.


Commande (m pour l'aide) : Création d'une nouvelle étiquette pour disque de type DOS avec identifiant de disque 0xbc2937bf.

Commande (m pour l'aide) : Disque /dev/sdc : 3,7 GiB, 3904897024 octets, 7626752 secteurs
Unités : secteur de 1 × 512 = 512 octets
Taille de secteur (logique / physique) : 512 octets / 512 octets
taille d'E/S (minimale / optimale) : 512 octets / 512 octets
Type d'étiquette de disque : dos
Identifiant de disque : 0xbc2937bf

Commande (m pour l'aide) : Type de partition
   p   primaire (0 primaire, 0 étendue, 4 libre)
   e   étendue (conteneur pour partitions logiques)
Sélectionnez (p par défaut) : Numéro de partition (1-4, 1 par défaut) : Premier secteur (2048-7626751, 2048 par défaut) : Dernier secteur, +secteurs ou +taille{K,M,G,T,P} (2048-7626751, 7626751 par défaut) : 
Une nouvelle partition 1 de type « Linux » et de taille 100 MiB a été créée.

Commande (m pour l'aide) : Partition 1 sélectionnée
Type de partition (taper L pour afficher tous les types) : Type de partition « Linux » modifié en « W95 FAT32 (LBA) ».

Commande (m pour l'aide) : Type de partition
   p   primaire (1 primaire, 0 étendue, 3 libre)
   e   étendue (conteneur pour partitions logiques)
Sélectionnez (p par défaut) : Numéro de partition (2-4, 2 par défaut) : Premier secteur (206848-7626751, 206848 par défaut) : Dernier secteur, +secteurs ou +taille{K,M,G,T,P} (206848-7626751, 7626751 par défaut) : 
Une nouvelle partition 2 de type « Linux » et de taille 3,6 GiB a été créée.

Commande (m pour l'aide) : La table de partitions a été altérée.
Appel d'ioctl() pour relire la table de partitions.
Synchronisation des disques.

mkfs.fat 3.0.28 (2015-05-16)
mke2fs 1.42.13 (17-May-2015)
En train de créer un système de fichiers avec 927488 4k blocs et 232000 i-noeuds.
UUID de système de fichiers=db5cddad-51f9-4a26-9000-40e04c548260
Superblocs de secours stockés sur les blocs : 
    32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocation des tables de groupe : complété                        
Écriture des tables d'i-noeuds : complété                        
Création du journal (16384 blocs) : complété
Écriture des superblocs et de l'information de comptabilité du système de
fichiers : complété

Partitions mounted
tar: Ignoring unknown extended header keyword 'SCHILY.fflags'
tar: Ignoring unknown extended header keyword 'LIBARCHIVE.xattr.security.capability'
(the two messages above repeat many more times in the output)
OS written to SD Card
nsteinmetz@swiip:~/Téléchargements/kubernetes-on-arm$ 

Thanks,
Nicolas

Only one volume being mounted?

Hey, I have the following config:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/volumes/first"

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0002
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/volumes/second"

---
kind: Pod
...
      volumeMounts:
      - mountPath: "/etc/sudoers.d"
        name: mypv1
      volumeMounts:
      - mountPath: "/var/lib"
        name: mypv2
  volumes:
    - name: mypv1
      persistentVolumeClaim:
       claimName: myclaim1
    - name: mypv2
      persistentVolumeClaim:
       claimName: myclaim2

kubectl get pv
NAME      LABELS       CAPACITY   ACCESSMODES   STATUS   CLAIM              REASON   AGE
pv0001    type=local   1Gi        RWO           Bound    default/myclaim2            6s
pv0002    type=local   1Gi        RWO           Bound    default/myclaim1            6s

kubectl get pvc
NAME       LABELS   STATUS   VOLUME   CAPACITY   ACCESSMODES   AGE
myclaim1            Bound    pv0002   1Gi        RWO           55s
myclaim2            Bound    pv0001   1Gi        RWO           54s

First thing that doesn't work when we have volumes attached is kubectl exec:

kubectl exec -t -i volume-test /bin/bash
error: unexpected response status code 500 (Internal Server Error)

But when I exec directly from docker, here's what I have:

mounts:

    {
        "Name": "f1736e7aa452778344c10acb0df3ae08ab5283136a7b59d6d26d39a58993bd73",
        "Source": "/media/docker/volumes/f1736e7aa452778344c10acb0df3ae08ab5283136a7b59d6d26d39a58993bd73/_data",
        "Destination": "/etc/sudoers.d",
        "Driver": "local",
        "Mode": "",
        "RW": true
    },
    {
        "Source": "/volumes/first",
        "Destination": "/var/lib",
        "Mode": "",
        "RW": true
    }

So the first volume is being mounted to a docker-managed directory, while the second is actually a bind mount of the host path, but that one seems to be OK. Could somebody test the same, please? Just create two volumes and two claims, and attach them to a container.
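
One thing worth checking (an observation about the YAML itself, not verified on this cluster): the Pod snippet above declares the volumeMounts key twice, and a YAML parser only keeps one of the duplicate keys, which on its own would explain seeing a single mount. Merged into one list it would look like:

      volumeMounts:
      - mountPath: "/etc/sudoers.d"
        name: mypv1
      - mountPath: "/var/lib"
        name: mypv2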

Hypriot as an alternative to Arch?

Hi,

I think you may know the project but Hypriot provides docker (+compose + swarm + machine) based on Debian Wheezy/Jessie.

Don't know if it could be of interest for your project.

0.6.3

Did you not change the package version in the .deb file? After downloading 0.6.3 and installing it, dpkg -l still shows version 0.6.2-2:

ii kubernetes-on-arm 0.6.2-2 armhf Kubernetes for ARM devices

kube-config enable-master failed

Hello,
Awesome project! Thanks for all your hard work.
I am using this for my school project. I have two Raspberry Pi2 Model B and trying to make a cluster.
But when I tried to start the master node I got this error:

[root@pi1 ~]# kube-config enable-master
Disabling k8s if it is running
Removed symlink /etc/systemd/system/multi-user.target.wants/k8s-master.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/flannel.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/etcd.service.
Checks so all images are present
Transferring images to system-docker, if necessary
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.
Job for docker.service failed because a configured resource limit was exceeded. See "systemctl status docker.service" and "journalctl -xe" for details.
Starting the master containers
Created symlink from /etc/systemd/system/multi-user.target.wants/k8s-master.service to /usr/lib/systemd/system/k8s-master.service.
Job for k8s-master.service failed because the control process exited with error code. See "systemctl status k8s-master.service" and "journalctl -xe" for details.
Master Kubernetes services enabled

Status

[root@pi1 ~]# systemctl status etcd.service
* etcd.service - Etcd Master Data Store for Kubernetes Apiserver
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2015-11-26 10:06:27 PST; 1min 1s ago
 Main PID: 9893 (code=exited, status=1/FAILURE)

Nov 26 10:06:23 pi1 docker[9893]: operation not permitted
Nov 26 10:06:24 pi1 docker[9893]: Error response from daemon: Cannot start container 30d6de2580837a1d3c647f584d66a6133cdec75538002813ee25834f494a820a: [8] System erro...ot permitted
Nov 26 10:06:24 pi1 systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 10:06:26 pi1 bash[9894]: operation not permitted
Nov 26 10:06:26 pi1 bash[9894]: Error response from daemon: Cannot start container c7e7913e70fa36783947042a0eeb9a66dfdbb9fe2d0e116565ca2a68a59f5519: [8] System error:...ot permitted
Nov 26 10:06:26 pi1 systemd[1]: etcd.service: Control process exited, code=exited status=1
Nov 26 10:06:27 pi1 docker[9937]: k8s-etcd
Nov 26 10:06:27 pi1 systemd[1]: Failed to start Etcd Master Data Store for Kubernetes Apiserver.
Nov 26 10:06:27 pi1 systemd[1]: etcd.service: Unit entered failed state.
Nov 26 10:06:27 pi1 systemd[1]: etcd.service: Failed with result 'exit-code'.
Hint: Some lines were ellipsized, use -l to show in full.
[root@pi1 ~]# journalctl -xe
Nov 26 10:09:14 pi1 docker[9974]: E1126 18:09:14.496914 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:15 pi1 docker[9974]: E1126 18:09:15.498819 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:16 pi1 docker[9974]: E1126 18:09:16.500722 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:17 pi1 docker[9974]: E1126 18:09:17.502528 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:18 pi1 docker[9974]: E1126 18:09:18.504369 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:19 pi1 systemd[1]: k8s-master.service: Service hold-off time over, scheduling restart.
Nov 26 10:09:19 pi1 systemd[1]: Stopped The Master Components for Kubernetes.
-- Subject: Unit k8s-master.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit k8s-master.service has finished shutting down.
Nov 26 10:09:19 pi1 systemd[1]: Starting The Master Components for Kubernetes...
-- Subject: Unit k8s-master.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit k8s-master.service has begun starting up.
Nov 26 10:09:19 pi1 docker[9974]: E1126 18:09:19.506442 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:19 pi1 docker[11104]: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Nov 26 10:09:19 pi1 docker[11104]: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Nov 26 10:09:19 pi1 docker[11104]: Error: failed to kill containers: [k8s-master k8s-worker-proxy]
Nov 26 10:09:19 pi1 docker[11113]: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Nov 26 10:09:19 pi1 docker[11113]: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Nov 26 10:09:19 pi1 docker[11113]: Error: failed to remove containers: [k8s-master k8s-worker-proxy]
Nov 26 10:09:20 pi1 docker[11124]: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Nov 26 10:09:20 pi1 systemd[1]: k8s-master.service: Control process exited, code=exited status=1
Nov 26 10:09:20 pi1 docker[11137]: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Nov 26 10:09:20 pi1 docker[11137]: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Nov 26 10:09:20 pi1 docker[11137]: Error: failed to stop containers: [k8s-master k8s-worker-proxy]
Nov 26 10:09:20 pi1 systemd[1]: k8s-master.service: Control process exited, code=exited status=1
Nov 26 10:09:20 pi1 systemd[1]: Failed to start The Master Components for Kubernetes.
-- Subject: Unit k8s-master.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit k8s-master.service has failed.
--
-- The result is failed.
Nov 26 10:09:20 pi1 systemd[1]: k8s-master.service: Unit entered failed state.
Nov 26 10:09:20 pi1 systemd[1]: k8s-master.service: Failed with result 'exit-code'.
Nov 26 10:09:20 pi1 docker[9974]: E1126 18:09:20.509224 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:21 pi1 docker[9974]: E1126 18:09:21.511007 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:22 pi1 docker[9974]: E1126 18:09:22.512882 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
Nov 26 10:09:23 pi1 docker[9974]: E1126 18:09:23.514737 00001 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured

Version info

[root@pi1 ~]# docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.5.1
 Git commit:   a34a1d5-dirty
 Built:        Mon Nov 23 14:47:45 UTC 2015
 OS/Arch:      linux/arm
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
[root@pi1 ~]# kube-config info
Architecture: armv7l
Kernel: Linux 4.1.13
CPU: 4 cores x 900 MHz

Used RAM Memory: 45 MiB
RAM Memory: 923 MiB

Used disk space: 3.2GB (3265776 KB)
Free disk space: 3.6GB (3741908 KB)

SD Card was built: 04-11-2015 21:17

kubernetes-on-arm:
Latest commit: ccea1ef
Version: 0.5.8

systemd version: 227

I found a similar issue in one of the other threads, but it was kind of hard to follow, so I thought it best to start a new thread.

Thanks for your help in advance.

Docker hello world error

Getting this error while trying to run docker hello-world:

exec format error
Error response from daemon: Cannot start container 977d87157e8c34e70c1ae8af399a7dd3a41623b991a907ba969011b6ef6f5e54: [8] System error: exec format error

Any idea why?
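
A likely explanation (an assumption, not verified on this board): the stock hello-world image is built for amd64, so its binary can't run on ARM and the kernel reports "exec format error". Checking the architecture and trying an ARM-built test image should confirm it (the image name below is illustrative):

uname -m                               # armv7l on a Raspberry Pi 2
docker run hypriot/armhf-hello-world   # an ARM-built hello-world image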

kube-config install checking

There should probably be some input checking in the kube-config install script. If you happen to make a typo, rather than telling you that you've picked an invalid option, it just carries on and then errors out. Example:

Which OS do you have? Options: [archlinux, hypriotos, systemd]. hypriot
/usr/bin/kube-config: line 103: /etc/kubernetes/dynamic-env/hypriot.sh: No such file or directory
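
A sketch of the kind of check being asked for (hypothetical code, not the script's actual structure; $OS is a placeholder for the answer that was read in):

case "$OS" in
    archlinux|hypriotos|systemd)
        source "/etc/kubernetes/dynamic-env/${OS}.sh" ;;
    *)
        echo "Invalid OS '$OS'. Options: [archlinux, hypriotos, systemd]."
        exit 1 ;;
esac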

write.sh only manage /dev/sdX but not /dev/mmcblk0

Hi,

My SD card is seen as /dev/mmcblk0 when using my internal SD card reader, so write.sh does not work in this context, as the partitions should be named /dev/mmcblk0p1, /dev/mmcblk0p2, etc.

The output is in French, but the gist is that /dev/mmcblk01 and /dev/mmcblk02 do not exist because they are not correctly named. So I'll use my external SD card reader to get /dev/sdX and go further ;-)

Btw, you first need to make write.sh executable (via chmod +x, for example).

sudo sdcard/write.sh /dev/mmcblk0 rpi archlinux kube-archlinux
You are going to lose all your data on /dev/mmcblk0. Continue? [Y/n]y
OK. Continuing...
Now /dev/mmcblk0 is going to be partitioned

Bienvenue dans fdisk (util-linux 2.27).
Les modifications resteront en mémoire jusqu'à écriture.
Soyez prudent avant d'utiliser la commande d'écriture.


Commande (m pour l'aide) : Création d'une nouvelle étiquette pour disque de type DOS avec identifiant de disque 0x9114f8ad.

Commande (m pour l'aide) : Disque /dev/mmcblk0 : 3,7 GiB, 3904897024 octets, 7626752 secteurs
Unités : secteur de 1 × 512 = 512 octets
Taille de secteur (logique / physique) : 512 octets / 512 octets
taille d'E/S (minimale / optimale) : 512 octets / 512 octets
Type d'étiquette de disque : dos
Identifiant de disque : 0x9114f8ad

Commande (m pour l'aide) : Type de partition
   p   primaire (0 primaire, 0 étendue, 4 libre)
   e   étendue (conteneur pour partitions logiques)
Sélectionnez (p par défaut) : Numéro de partition (1-4, 1 par défaut) : Premier secteur (2048-7626751, 2048 par défaut) : Dernier secteur, +secteurs ou +taille{K,M,G,T,P} (2048-7626751, 7626751 par défaut) : 
Une nouvelle partition 1 de type « Linux » et de taille 100 MiB a été créée.

Commande (m pour l'aide) : Partition 1 sélectionnée
Type de partition (taper L pour afficher tous les types) : Type de partition « Linux » modifié en « W95 FAT32 (LBA) ».

Commande (m pour l'aide) : Type de partition
   p   primaire (1 primaire, 0 étendue, 3 libre)
   e   étendue (conteneur pour partitions logiques)
Sélectionnez (p par défaut) : Numéro de partition (2-4, 2 par défaut) : Premier secteur (206848-7626751, 206848 par défaut) : Dernier secteur, +secteurs ou +taille{K,M,G,T,P} (206848-7626751, 7626751 par défaut) : 
Une nouvelle partition 2 de type « Linux » et de taille 3,6 GiB a été créée.

Commande (m pour l'aide) : La table de partitions a été altérée.
Appel d'ioctl() pour relire la table de partitions.
Synchronisation des disques.

which: no mkfs.vfat in (/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/sbin:/usr/sbin)
résolution des dépendances...
recherche des conflits entre paquets...

Paquets (1) dosfstools-3.0.28-1

Taille totale du téléchargement :  0,07 MiB
Taille totale installée :         0,23 MiB

:: Procéder à l’installation ? [O/n] 
:: Récupération des paquets...
 dosfstools-3.0.28-1...    75,9 KiB  4,36M/s 00:00 [######################] 100%
(1/1) vérification des clés dans le trousseau      [######################] 100%
(1/1) vérification de l’intégrité des paquets      [######################] 100%
(1/1) chargement des fichiers des paquets          [######################] 100%
(1/1) analyse des conflits entre fichiers          [######################] 100%
(1/1) vérification de l’espace disque disponible   [######################] 100%
(1/1) installation de dosfstools                   [######################] 100%
mkfs.fat 3.0.28 (2015-05-16)
/dev/mmcblk01: No such file or directory
mount: le périphérique spécial /dev/mmcblk01 n'existe pas
mke2fs 1.42.13 (17-May-2015)
Le fichier /dev/mmcblk02 n'existe pas et aucune taille n'a été spécifiée.
mount: le périphérique spécial /dev/mmcblk02 n'existe pas
nsteinmetz@swiip:~/Téléchargements/kubernetes-on-arm$ 
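
A sketch of how write.sh could handle both naming schemes (variable names are illustrative): devices like /dev/mmcblk0 need a "p" before the partition number, while /dev/sdX does not:

if [[ "$DEVICE" == *mmcblk* ]]; then
    PART1="${DEVICE}p1"
    PART2="${DEVICE}p2"
else
    PART1="${DEVICE}1"
    PART2="${DEVICE}2"
fi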

Hostname for master

Hey, can the master be connected to by its hostname?

For instance :
To enable the worker service, run
kube-config enable-worker [master-ip or hostname]

kube2sky and skydns won't work

I got

{"log":"2016/01/03 10:46:01 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [5]\n","stream":"stderr","time":"2016-01-03T10:46:01.494293735Z"}

and none of the hosts would register with DNS. Any idea on how to debug it?
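
A couple of generic debugging steps (a sketch; <kube-dns-pod> is a placeholder for whatever name kubectl actually shows). The log line means skydns found no /skydns key in etcd, which usually points at kube2sky not being able to talk to the apiserver or etcd:

kubectl get pods --namespace=kube-system                          # find the kube-dns pod name
kubectl logs <kube-dns-pod> -c kube2sky --namespace=kube-system   # see why it isn't writing records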

kube-config delete-data - strange message

Hi,

I tested delete-data, and even though it seems to work, the fact that I got the umount usage text suggests something was not done correctly.

[root@kube1 ~]# kube-config delete-data
Do you want to delete all Kubernetes data about this cluster? m(ove) is default, which moves the directories to {,old}. y deletes them and n exits [M/n/y] y

Usage:
 umount [-hV]
 umount -a [options]
 umount [options] <source> | <directory>

Unmount filesystems.

Options:
 -a, --all               unmount all filesystems
 -A, --all-targets       unmount all mountpoints for the given device in the
                           current namespace
 -c, --no-canonicalize   don't canonicalize paths
 -d, --detach-loop       if mounted loop device, also free this loop device
     --fake              dry run; skip the umount(2) syscall
 -f, --force             force unmount (in case of an unreachable NFS system)
 -i, --internal-only     don't call the umount.<type> helpers
 -n, --no-mtab           don't write to /etc/mtab
 -l, --lazy              detach the filesystem now, clean up things later
 -O, --test-opts <list>  limit the set of filesystems (use with -a)
 -R, --recursive         recursively unmount a target with all its children
 -r, --read-only         in case unmounting fails, try to remount read-only
 -t, --types <list>      limit the set of filesystem types
 -v, --verbose           say what is being done

 -h, --help     display this help and exit
 -V, --version  output version information and exit

For more details see umount(8).
Deleted all Kubernetes data
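
The usage text suggests umount was called with an empty argument. A defensive pattern for the script would be to unmount only when the path is actually a mountpoint (a sketch, not the actual kube-config code; $DATA_DIR is illustrative):

if mountpoint -q "$DATA_DIR"; then
    umount "$DATA_DIR"
fi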

error building

I pulled the repo (master branch = 0.5.8?) and followed the instructions.
Wrote the SD card from an Ubuntu distro.

I'm getting an error building hyperkube:

Building: kubernetesonarm/build
Sending build context to Docker daemon 9.728 kB
Sending build context to Docker daemon
Step 0 : FROM luxas/go
 ---> 65e1d60afb17
Step 1 : COPY inbuild.sh version.sh /
 ---> 3e841df0eb1e
Removing intermediate container 9ee57d74055c
Step 2 : RUN ./inbuild.sh
 ---> Running in 03640d1cfa71
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   645  100   645    0     0    112      0  0:00:05  0:00:05 --:--:--   199
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   368  100   368    0     0     64      0  0:00:05  0:00:05 --:--:--    87
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   500  100   500    0     0     86      0  0:00:05  0:00:05 --:--:--   117
patching file etcdserver/raft.go
patching file etcdserver/server.go
Hunk #1 succeeded at 114 (offset 1 line).
patching file store/watcher_hub.go
fatal: Not a git repository (or any of the parent directories): .git
Building flanneld...
+++ [1114 01:52:01] Building go targets for linux/arm:
    cmd/hyperkube
    cmd/kubectl
# k8s.io/kubernetes/cmd/hyperkube
/goroot/pkg/tool/linux_arm/5l: running gcc failed: Cannot allocate memory
!!! Error in /build/kubernetes/hack/lib/golang.sh:383
  'go install "${goflags[@]:+${goflags[@]}}" -ldflags "${version_ldflags}" "${nonstatics[@]:+${nonstatics[@]}}"' exited with status 2
Call stack:
  1: /build/kubernetes/hack/lib/golang.sh:383 kube::golang::build_binaries_for_platform(...)
  2: /build/kubernetes/hack/lib/golang.sh:521 kube::golang::build_binaries(...)
  3: ./hack/build-go.sh:26 main(...)
Exiting with status 1
!!! Error in /build/kubernetes/hack/lib/golang.sh:439
  '( kube::golang::setup_env; local version_ldflags; version_ldflags=$(kube::version::ldflags); local host_platform; host_platform=$(kube::golang::host_platform); local goflags; eval "goflags=(${KUBE_GOFLAGS:-})"; local use_go_build; local -a targets=(); local arg; for arg in "$@";
do
    if [[ "${arg}" == "--use_go_build" ]]; then
        use_go_build=true;
    else
        if [[ "${arg}" == -* ]]; then
            goflags+=("${arg}");
        else
            targets+=("${arg}");
        fi;
    fi;
done; if [[ ${#targets[@]} -eq 0 ]]; then
    targets=("${KUBE_ALL_TARGETS[@]}");
fi; local -a platforms=("${KUBE_BUILD_PLATFORMS[@]:+${KUBE_BUILD_PLATFORMS[@]}}"); if [[ ${#platforms[@]} -eq 0 ]]; then
    platforms=("${host_platform}");
fi; local binaries; binaries=($(kube::golang::binaries_from_targets "${targets[@]}")); local parallel=false; if [[ ${#platforms[@]} -gt 1 ]]; then
    local gigs; gigs=$(kube::golang::get_physmem); if [[ ${gigs} -gt ${KUBE_PARALLEL_BUILD_MEMORY} ]]; then
        kube::log::status "Multiple platforms requested and available ${gigs}G > threshold ${KUBE_PARALLEL_BUILD_MEMORY}G, building platforms in parallel"; parallel=true;
    else
        kube::log::status "Multiple platforms requested, but available ${gigs}G < threshold ${KUBE_PARALLEL_BUILD_MEMORY}G, building platforms in serial"; parallel=false;
    fi;
fi; if [[ "${parallel}" == "true" ]]; then
    kube::log::status "Building go targets for ${platforms[@]} in parallel (output will appear in a burst when complete):" "${targets[@]}"; local platform; for platform in "${platforms[@]}";
    do
        ( kube::golang::set_platform_envs "${platform}"; kube::log::status "${platform}: go build started"; kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-}; kube::log::status "${platform}: go build finished" ) &> "/tmp//${platform//\//_}.build" &
    done; local fails=0; for job in $(jobs -p);
    do
        wait ${job} || let "fails+=1";
    done; for platform in "${platforms[@]}";
    do
        cat "/tmp//${platform//\//_}.build";
    done; exit ${fails};
else
    for platform in "${platforms[@]}";
    do
        kube::log::status "Building go targets for ${platform}:" "${targets[@]}"; kube::golang::set_platform_envs "${platform}"; kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-};
    done;
fi )' exited with status 1
Call stack:
  1: /build/kubernetes/hack/lib/golang.sh:439 kube::golang::build_binaries(...)
  2: ./hack/build-go.sh:26 main(...)
Exiting with status 1
cp: cannot stat '_output/local/bin/linux/arm/*': No such file or directory
+ go build --ldflags '-extldflags "-static" -s' pause.go
+ go get github.com/pwaller/goupx
+ goupx pause
2015/11/14 01:54:26 {Class:ELFCLASS32 Data:ELFDATA2LSB Version:EV_CURRENT OSABI:ELFOSABI_NONE ABIVersion:0 ByteOrder:LittleEndian Type:ET_EXEC Machine:EM_ARM Entry:284028}
2015/11/14 01:54:26 File fixed!
                       Ultimate Packer for eXecutables
                          Copyright (C) 1996 - 2013
UPX 3.91        Markus Oberhumer, Laszlo Molnar & John Reiser   Sep 30th 2013

        File size         Ratio      Format      Name
   --------------------   ------   -----------   -----------
    632584 ->    227064   35.89%   linux/armel   pause

Packed 1 file.

Then, when building the hyperkube image, it says the executable is missing.

I haven't done much Go besides reading through the k8s code to understand stuff, so I'm not sure what all this means.

Any idea?

Thanks for the help.
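
A possible workaround sketch, assuming the "Cannot allocate memory" error above means the Go link step ran out of RAM on a ~1 GB board: add temporary swap before building, then remove it afterwards.

sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# ...run the build, then: sudo swapoff /swapfile && sudo rm /swapfile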

Get logs from node

Just curious if you are able to get logs (i.e. kubectl logs -f <pod>). I'm getting connection refused. Just an FYI, I am working with HypriotOS now and copying out your config / images.

Error from server: Get https://k8s-node03:10250/containerLogs/default/my-nginx-5dgq1/my-nginx?follow=true: dial tcp 192.168.2.13:10250: connection refused

I did update the local hosts file to include the node IPs, but no luck there.
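
Not an answer, but a quick check sketch to see whether this is the same binding problem described in the "Pulling Logs from Pod Fails" issue further down (kubelet only listening on 127.0.0.1:10250); run it on the node that refuses the connection:

netstat -tlnp | grep 10250   # 127.0.0.1:10250 means loopback only; 0.0.0.0 or ::: means all interfaces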

Mix of Rpi1/Rpi2+CubieTruck (or armv6 vs armv7)

Hi,

Did you try a mix of RPi1 (armv6) and RPi2/CubieTruck (armv7) in a Kubernetes cluster? I was thinking of using the RPi1 as master and the RPi2s as worker nodes, so that only armv7 images run on the RPi2/CubieTruck.

Or should we have an armv7-only k8s cluster on one side and an armv6-only k8s cluster on the other?

I should test your project soon. Do you think it could be compatible with CubieTruck boards too?
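
One hedged way to keep armv7-only images off armv6 boards in a mixed cluster would be to label the nodes by architecture and schedule with a matching nodeSelector; the node names and label key below are hypothetical and this has not been tested with this project:

kubectl label node rpi2-worker01 cpuarch=armv7
kubectl label node rpi1-node01   cpuarch=armv6
# pods that require armv7 then set: nodeSelector: { cpuarch: armv7 }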

Pulling Logs from Pod Fails - using Arch + RPI2

Hello. One more quick multinode fix / pull request on the way. Ran into a hiccup while attempting to view logs in a multi-Pi setup.

[root@kube-master redis-to-mysql]# kubectl logs twitter-to-redis-fycg3
Error from server: Get https://192.168.0.22:10250/containerLogs/default/twitter-to-redis-fycg3/twitter-to-redis: dial tcp 192.168.0.22:10250: connection refused

Checking the listening ports on the worker, it looked like the kubelet was binding to 127.0.0.1:10250 instead of all interfaces and/or the external IP.

[root@kube-slave02 ~]# netstat -antp            
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10255         0.0.0.0:*               LISTEN      5055/hyperkube      
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      220/sshd            
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      5055/hyperkube      
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      5050/hyperkube      
tcp        0      0 127.0.0.1:10250         0.0.0.0:*               LISTEN      5055/hyperkube      
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      214/systemd-resolve 
tcp        0      0 192.168.0.22:57287      192.168.0.20:8080       ESTABLISHED 5055/hyperkube      
tcp        0      0 192.168.0.22:55814      192.168.0.20:4001       ESTABLISHED 4918/flanneld       
tcp        0      0 192.168.0.22:57288      192.168.0.20:8080       ESTABLISHED 5055/hyperkube      
tcp        0      0 192.168.0.22:41254      192.168.0.20:8080       ESTABLISHED 5055/hyperkube      
tcp        0    232 192.168.0.22:22         192.168.0.10:58958      ESTABLISHED 5115/sshd: root@pts 
tcp        0      0 192.168.0.22:42359      192.168.0.20:8080       ESTABLISHED 5050/hyperkube      
tcp        0      0 192.168.0.22:42358      192.168.0.20:8080       ESTABLISHED 5050/hyperkube      
tcp6       0      0 :::22                   :::*                    LISTEN      220/sshd            
tcp6       0      0 :::4194                 :::*                    LISTEN      5055/hyperkube      
tcp6       0      0 :::5355                 :::*                    LISTEN      214/systemd-resolve 

Poking around, I then read through this doc and made a couple of tweaks to the service startup options on the worker node.

cat /usr/lib/systemd/system/k8s-worker.service

[Unit]
Description=The Worker Components for Kubernetes
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill k8s-worker k8s-worker-proxy
ExecStartPre=-/usr/bin/docker rm k8s-worker k8s-worker-proxy
ExecStartPre=-/bin/sh -c "mkdir -p /etc/kubernetes/static/worker"
ExecStart=/bin/sh -c "exec docker run                                                                                   \
                                        --name=k8s-worker                                                                                                       \
                                        --net=host                                                                                                                      \
                                        -v /etc/kubernetes/static/worker:/etc/kubernetes/manifests          \
                                        -v /:/rootfs:ro                                                                                                         \
                                        -v /sys:/sys:ro                                                                                                         \
                                        -v /dev:/dev                                                                                                            \
                                        -v /var/lib/docker/:/var/lib/docker:rw                                                          \
                                        -v /var/lib/kubelet:/var/lib/kubelet:rw                                                         \
                                        -v /var/run:/var/run:rw                                                                                         \
                                        --privileged=true                                                                               \
                                        --pid=host                                                                                                                      \
                                        kubernetesonarm/hyperkube /hyperkube kubelet                        \
                                                --allow-privileged=true                                                                                 \
                                                --containerized                                                                                                 \
                                                --pod_infra_container_image=kubernetesonarm/pause               \
                                                --api-servers=http://${K8S_MASTER_IP}:8080                      \
                                                --cluster-dns=10.0.0.10                                                                                 \
                                                --cluster-domain=cluster.local                                                                  \
                                                --v=2                                                                                                                   \
                                                --address=0.0.0.0                                                                                               \
                                                --enable-server                                                                                                 \
                                                --hostname-override=$(ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1) \
                                                --config=/etc/kubernetes/manifests"

ExecStartPost=/usr/bin/docker run -d                                                                                                    \
                                                --name=k8s-worker-proxy                                                                                 \
                                                --net=host                                                                                                              \
                                                --privileged=true                                                                       \
                                                kubernetesonarm/hyperkube /hyperkube proxy                      \
                                                        --master=http://${K8S_MASTER_IP}:8080                       \
                                                        --v=2
ExecStop=/usr/bin/docker stop k8s-worker k8s-worker-proxy
EnvironmentFile=/etc/kubernetes/k8s.conf
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Then cycled the worker (note the --address=0.0.0.0 flag above, which makes the kubelet listen on all interfaces):

[root@kube-slave03 ~]# kube-config disable-node
Removed symlink /etc/systemd/system/multi-user.target.wants/flannel.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/k8s-worker.service.
[root@kube-slave03 ~]# kube-config enable-worker 192.168.0.20
Disabling k8s if it is running
Using master ip: 192.168.0.20
Checks so all images are present
Transferring images to system-docker, if necessary
Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
Starting worker components in docker containers
Created symlink from /etc/systemd/system/multi-user.target.wants/k8s-worker.service to /usr/lib/systemd/system/k8s-worker.service.
Kubernetes worker services enabled

After the restart, port 10250 was bound to all interfaces:

[root@kube-slave01 ~]# netstat -antp 
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      212/sshd            
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      9682/hyperkube      
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      9676/hyperkube      
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      211/systemd-resolve 
tcp        0      0 192.168.0.21:54225      192.168.0.20:8080       ESTABLISHED 9682/hyperkube      
tcp        0      0 192.168.0.21:54227      192.168.0.20:8080       ESTABLISHED 9682/hyperkube      
tcp        0    232 192.168.0.21:22         192.168.0.10:60933      ESTABLISHED 260/sshd: root@pts/ 
tcp        0      0 192.168.0.21:54226      192.168.0.20:8080       ESTABLISHED 9682/hyperkube      
tcp        0      0 192.168.0.21:54404      192.168.0.20:8080       ESTABLISHED 9676/hyperkube      
tcp        0      0 192.168.0.21:55793      192.168.0.20:4001       ESTABLISHED 9549/flanneld       
tcp        0      0 192.168.0.21:54405      192.168.0.20:8080       ESTABLISHED 9676/hyperkube      
tcp6       0      0 :::10255                :::*                    LISTEN      9682/hyperkube      
tcp6       0      0 :::22                   :::*                    LISTEN      212/sshd            
tcp6       0      0 :::4194                 :::*                    LISTEN      9682/hyperkube      
tcp6       0      0 :::45891                :::*                    LISTEN      9676/hyperkube      
tcp6       0      0 :::10250                :::*                    LISTEN      9682/hyperkube      

And voila - it's happy ... yay!

[root@kube-master cnbot-rpi]# kubectl logs redis-master-g4xil
1:C 29 Dec 22:31:45.866 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 29 Dec 22:31:45.879 # Warning: 32 bit instance detected but no memory limit set. Setting 3 GB maxmemory limit with 'noeviction' policy now.
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 3.0.0 (00000000/0) 32 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

1:M 29 Dec 22:31:45.882 # Server started, Redis version 3.0.0
1:M 29 Dec 22:31:45.883 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 29 Dec 22:31:45.883 * The server is now ready to accept connections on port 6379

I'll whip up a quick pull request to dev.

Cheers,
Kyle

question: how does it work?

Hello,

Awesome project!

I have a question before spending the time using it:
I deployed k8s on CoreOS before: one would set up a cluster, which uses etcd for discovery; you need 3+ nodes to get started. Flannel then uses etcd for communication in the cluster, and then Docker runs using flannel as its networking layer. Once Docker runs, k8s can use it to deploy / orchestrate containers.

So I'm not sure I understand how this project works: you're running k8s inside Docker, which means Docker needs to be up before you even set up etcd, flannel, Docker and then k8s on top.
Are you running Docker in Docker, with the lower-level Docker running as a privileged container that uses the host networking layer?

The future of this project/Wishlist

I want this to be a kind of tracking issue for things the community (you) want from this project.
Note: I do not promise to make all features one may wish for/propose, but I would like to know what you think.

Also, feel free to comment here if you are using this project and it is working; that will probably give more fuel to this project :)

I'm working on merging this functionality into mainline Kubernetes, and for that feedback would be greatly appreciated, so the Kubernetes guys know which features count.
Here is the ARM tracking issue: kubernetes/kubernetes#17981

kube-ui not in DockerHub

When trying to use the kube-ui addon, I'm getting the following error when the pod is trying to start:

Failed  Failed to pull image "kubernetesonarm/kube-ui": Error: image kubernetesonarm/kube-ui:latest not found

[dev] kube-config - symlink for /etc/kubernetes/binaries is broken

The symlink is dangling:

[root@thanos kubernetesonarm]# ls -al /etc/kubernetes/binaries 
lrwxrwxrwx 1 root root 57 Oct 25 19:40 /etc/kubernetes/binaries -> /etc/kubernetes/source/images/kubernetesonarm/_bin/latest

However, this directory does not contain a _bin directory:

[root@thanos kubernetesonarm]# pwd
/etc/kubernetes/source/images/kubernetesonarm
[root@thanos kubernetesonarm]# ls
build  etcd  exechealthz  flannel  hyperkube  kube-ui  kube2sky  pause  registry  skydns
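
A quick check sketch, assuming the binaries may actually have been built somewhere else under the source tree:

readlink -f /etc/kubernetes/binaries                          # shows the (non-existent) target
[ -e /etc/kubernetes/binaries ] || echo "dangling symlink"
find /etc/kubernetes/source -maxdepth 4 -type d -name _bin    # where did the build put _bin, if anywhere?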

New features and usage

@nsteinmetz
Have you used the project?
Any suggestions?

I've added some new features in this release, v0.5.5
Is there something more that should be added?
The next thing I'm going to compile is Kube-UI.

Has anyone else tested this?

Unable to start using Hypriot OS

I started with the latest HypriotOS 0.6.1, and perhaps there is a compatibility issue, but when trying to start the kubelet with the latest compiled binaries here, the API server isn't accepting connections for some reason. I'm using Docker 1.9.1.

Here are the startup logs if anyone has any insights into why this might be:

black-pearl# sudo docker run -i -t --name=kube --net=host -v /var/run/docker.sock:/var/run/docker.sock kubernetesonarm/hyperkube /bin/bash
root@black-pearl:/# /hyperkube kubelet --pod_infra_container_image="kubernetesonarm/pause" --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable-server --config=/etc/kubernetes/manifests-multi
W1203 10:43:15.123409       6 server.go:585] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W1203 10:43:15.124034       6 server.go:547] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I1203 10:43:15.124816       6 plugins.go:71] No cloud provider specified.
I1203 10:43:15.125006       6 server.go:468] Successfully initialized cloud provider: "" from the config file: ""
I1203 10:43:15.125866       6 manager.go:128] cAdvisor running in container: "/system.slice/docker-0978411d3717bd4d48ab4461fbadfd784c5aa9a645b47240950481886756aa6e.scope"
I1203 10:43:16.474856       6 fs.go:108] Filesystem partitions: map[/dev/root:{mountpoint:/etc/resolv.conf major:179 minor:2 fsType: blockSize:0}]
E1203 10:43:16.498711       6 machine.go:126] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
E1203 10:43:16.499532       6 machine.go:86] Failed to get system UUID: open /proc/device-tree/vm,uuid: no such file or directory
I1203 10:43:16.531350       6 manager.go:163] Machine: {NumCores:4 CpuFrequency:900000 MemoryCapacity:970452992 MachineID:2d763621e59c44cbbaf0f8fd51e18e91 SystemUUID: BootID:655b09ab-2e4e-41ea-9224-d5481904e8c2 Filesystems:[{Device:/dev/root Capacity:29970837504}] DiskMap:map[179:0:{Name:mmcblk0 Major:179 Minor:0 Size:32026656768 Scheduler:deadline}] NetworkDevices:[{Name:eth0 MacAddress:b8:27:eb:40:30:33 Speed:100 Mtu:1500} {Name:wlan0 MacAddress:74:da:38:5c:6a:63 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[]} {Id:1 Threads:[1] Caches:[]} {Id:2 Threads:[2] Caches:[]} {Id:3 Threads:[3] Caches:[]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown}
I1203 10:43:16.537569       6 manager.go:169] Version: {KernelVersion:4.1.12-hypriotos-v7+ ContainerOsVersion:Raspbian GNU/Linux 8 (jessie) DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:}
I1203 10:43:16.544711       6 server.go:485] Using root directory: /var/lib/kubelet
W1203 10:43:16.547045       6 server.go:490] failed to set oom_score_adj to -999: write /proc/self/oom_score_adj: permission denied
I1203 10:43:16.556465       6 server.go:798] Adding manifest file: /etc/kubernetes/manifests-multi
I1203 10:43:16.557125       6 file.go:47] Watching path "/etc/kubernetes/manifests-multi"
I1203 10:43:16.561219       6 server.go:808] Watching apiserver
I1203 10:43:17.941575       6 plugins.go:56] Registering credential provider: .dockercfg
I1203 10:43:17.949930       6 plugins.go:262] Loaded volume plugin "kubernetes.io/aws-ebs"
I1203 10:43:17.950359       6 plugins.go:262] Loaded volume plugin "kubernetes.io/empty-dir"
I1203 10:43:17.950653       6 plugins.go:262] Loaded volume plugin "kubernetes.io/gce-pd"
I1203 10:43:17.950936       6 plugins.go:262] Loaded volume plugin "kubernetes.io/git-repo"
I1203 10:43:17.951372       6 plugins.go:262] Loaded volume plugin "kubernetes.io/host-path"
I1203 10:43:17.951665       6 plugins.go:262] Loaded volume plugin "kubernetes.io/nfs"
I1203 10:43:17.952300       6 plugins.go:262] Loaded volume plugin "kubernetes.io/secret"
I1203 10:43:17.952653       6 plugins.go:262] Loaded volume plugin "kubernetes.io/iscsi"
I1203 10:43:17.953033       6 plugins.go:262] Loaded volume plugin "kubernetes.io/glusterfs"
I1203 10:43:17.953378       6 plugins.go:262] Loaded volume plugin "kubernetes.io/persistent-claim"
I1203 10:43:17.954007       6 plugins.go:262] Loaded volume plugin "kubernetes.io/rbd"
I1203 10:43:17.954336       6 plugins.go:262] Loaded volume plugin "kubernetes.io/cinder"
I1203 10:43:17.954637       6 plugins.go:262] Loaded volume plugin "kubernetes.io/cephfs"
I1203 10:43:17.955051       6 plugins.go:262] Loaded volume plugin "kubernetes.io/downward-api"
I1203 10:43:17.955639       6 plugins.go:262] Loaded volume plugin "kubernetes.io/fc"
I1203 10:43:17.955963       6 plugins.go:262] Loaded volume plugin "kubernetes.io/flocker"
E1203 10:43:17.959349       6 kubelet.go:756] Image garbage collection failed: unable to find data for container /
W1203 10:43:17.971264       6 kubelet.go:775] Failed to move Kubelet to container "/kubelet": mkdir /sys/fs/cgroup/cpu/kubelet: read-only file system
I1203 10:43:17.971670       6 kubelet.go:777] Running in container "/kubelet"
E1203 10:43:17.975968       6 event.go:197] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
I1203 10:43:17.976475       6 server.go:72] Starting to listen on 0.0.0.0:10250
I1203 10:43:17.976920       6 server.go:757] Started kubelet
I1203 10:43:17.977057       6 server.go:89] Starting to listen read-only on 0.0.0.0:10255
I1203 10:43:18.147873       6 kubelet.go:2293] Recording NodeReady event message for node black-pearl
I1203 10:43:18.148279       6 kubelet.go:2293] Recording NodeHasSufficientDisk event message for node black-pearl
I1203 10:43:18.148539       6 kubelet.go:869] Attempting to register node black-pearl
I1203 10:43:18.156045       6 kubelet.go:872] Unable to register black-pearl with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I1203 10:43:18.470475       6 kubelet.go:2293] Recording NodeReady event message for node black-pearl
I1203 10:43:18.471410       6 kubelet.go:2293] Recording NodeHasSufficientDisk event message for node black-pearl
I1203 10:43:18.472090       6 kubelet.go:869] Attempting to register node black-pearl
I1203 10:43:18.478213       6 kubelet.go:872] Unable to register black-pearl with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I1203 10:43:18.966961       6 kubelet.go:2293] Recording NodeReady event message for node black-pearl
I1203 10:43:18.968500       6 kubelet.go:2293] Recording NodeHasSufficientDisk event message for node black-pearl
I1203 10:43:18.968975       6 kubelet.go:869] Attempting to register node black-pearl
I1203 10:43:18.975396       6 kubelet.go:872] Unable to register black-pearl with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I1203 10:43:19.857823       6 kubelet.go:2293] Recording NodeReady event message for node black-pearl
I1203 10:43:19.858466       6 kubelet.go:2293] Recording NodeHasSufficientDisk event message for node black-pearl
I1203 10:43:19.858713       6 kubelet.go:869] Attempting to register node black-pearl
I1203 10:43:19.864068       6 kubelet.go:872] Unable to register black-pearl with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I1203 10:43:20.088875       6 factory.go:236] Registering Docker factory
I1203 10:43:20.099986       6 factory.go:93] Registering Raw factory
E1203 10:43:20.849985       6 event.go:197] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
I1203 10:43:21.540537       6 kubelet.go:2293] Recording NodeReady event message for node black-pearl
I1203 10:43:21.541315       6 kubelet.go:2293] Recording NodeHasSufficientDisk event message for node black-pearl
I1203 10:43:21.541606       6 kubelet.go:869] Attempting to register node black-pearl
I1203 10:43:21.547994       6 kubelet.go:872] Unable to register black-pearl with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused
I1203 10:43:21.986156       6 manager.go:1006] Started watching for new ooms in manager
I1203 10:43:21.993452       6 oomparser.go:183] oomparser using systemd
I1203 10:43:22.021368       6 manager.go:250] Starting recovery of all containers
I1203 10:43:22.029635       6 manager.go:255] Recovery completed
I1203 10:43:22.058648       6 container_manager_linux.go:179] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
E1203 10:43:22.059471       6 kubelet.go:792] Failed to start ContainerManager, system may not be properly isolated: open /proc/sys/vm/overcommit_memory: read-only file system
I1203 10:43:22.059913       6 manager.go:104] Starting to sync pod status with apiserver
I1203 10:43:22.060161       6 kubelet.go:1953] Starting kubelet main sync loop.
I1203 10:43:22.060492       6 kubelet.go:2005] SyncLoop (ADD): "k8s-master-black-pearl_kube-system"
E1203 10:43:22.061326       6 kubelet.go:1908] error getting node: node 'black-pearl' is not in cache
I1203 10:43:22.091720       6 manager.go:1707] Need to restart pod infra container for "k8s-master-black-pearl_kube-system" because it is not found
W1203 10:43:22.097917       6 manager.go:108] Failed to updated pod status: error updating status for pod "k8s-master-black-pearl_kube-system": Get http://localhost:8080/api/v1/namespaces/kube-system/pods/k8s-master-black-pearl: dial tcp 127.0.0.1:8080: connection refused
E1203 10:43:22.997428       6 manager.go:1864] Failed to create pod infra container: open /proc/8748/cgroup: no such file or directory; Skipping pod "k8s-master-black-pearl_kube-system"
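
The repeated "dial tcp 127.0.0.1:8080: connection refused" lines suggest the apiserver never came up on this host. A quick check sketch, assuming the master manifest is supposed to start it locally:

docker ps | grep hyperkube               # is an apiserver/master container actually running?
curl -s http://localhost:8080/healthz    # should print "ok" once the apiserver is listening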
