cookeem / kubeadm-ha

Install a Kubernetes high-availability cluster with kubeadm, using the docker/containerd container runtime; applicable to v1.24.x and later.

License: MIT License

kubernetes kubeadm ha high-availability cluster nginx keepalived istio prometheus traefik

kubeadm-ha's People

Contributors

cookeem


kubeadm-ha's Issues

How to use istio

Thank you very much for this project; it is very powerful. By combining the 1.11 and 1.9 guides in the project, I managed to set up a single-master 1.11 cluster on my own.
I would like to ask how the project author uses istio in real-world work. @cookeem

ip address

In the hosts list, the masters have IP addresses 192.168.20.20 ~ 22 and the VIP is 192.168.20.10, but in the create-config.sh script you write this:
export K8SHA_VIP=192.168.60.79

master01 ip address

export K8SHA_IP1=192.168.60.72

master02 ip address

export K8SHA_IP2=192.168.60.77

master03 ip address

export K8SHA_IP3=192.168.60.78
I think there's something wrong.
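
For reference, a minimal sketch of the exports aligned with the documented hosts list (same variable names as create-config.sh; the values are only the example addresses from the hosts list and must be adapted to your environment):

# create-config.sh exports matching the documented hosts list (example values)
export K8SHA_VIP=192.168.20.10    # keepalived virtual IP
export K8SHA_IP1=192.168.20.20    # master01 ip address
export K8SHA_IP2=192.168.20.21    # master02 ip address
export K8SHA_IP3=192.168.20.22    # master03 ip address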

Worker nodes not visible via kubectl get node on VMs with several network interfaces

Good day,

First of all, thank you for the thorough guide. It helped me a lot!

Problem description
OS: CentOS Linux release 7.4.1708 (Core)

I'm using VMs created by Vagrant, which by default have several network interfaces:

eth0:  flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::5054:ff:fead:3b43  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:ad:3b:43  txqueuelen 1000  (Ethernet)
        RX packets 141997  bytes 158528777 (151.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 40171  bytes 2511712 (2.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.120.10  netmask 255.255.255.0  broadcast 192.168.120.255
        inet6 fe80::a00:27ff:fe13:acfa  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:13:ac:fa  txqueuelen 1000  (Ethernet)
        RX packets 127504  bytes 39089921 (37.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 109964  bytes 14854427 (14.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

master1 has ip 192.168.120.10
master2 - 192.168.120.82
master3 - 192.168.120.83

Following this step:

on devops-master01: use kubeadm to init a kubernetes cluster. Notice: you must save the following message: kubeadm join --token XXX --discovery-token-ca-cert-hash YYY, as this command will be used later.

I receive the join command on the eth0 network interface:
kubeadm join --token 7f276c.0741d82a5337f526 10.0.2.15:6443 --discovery-token-ca-cert-hash sha256:c1c15936be9b5c4429cf14074706927a410a150ccb334d6823257cd450f2fe42

Adding worker nodes with this command corrupts the result of kubectl get node: only the master nodes are listed.

My solution
If I edit kubeadm-init.yaml and add advertiseAddress, everything works fine and kubectl get node also returns the worker nodes.

Kubeadm-init.yaml example:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.120.83
kubernetesVersion: v1.9.1
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
apiServerCertSANs:
- kuber.master
- kuber.master2
- kuber.master3
- 192.168.120.10
- 192.168.120.82
- 192.168.120.83
- 192.168.120.2
- 127.0.0.1
etcd:
  endpoints:
  - http://192.168.120.10:2379
  - http://192.168.120.82:2379
  - http://192.168.120.83:2379
token: 7f276c.0741d82a5337f526
tokenTTL: "0"

Questions

  1. Am I doing something wrong?
  2. Maybe advertiseAddress should be added to the guide / your project workflow?

nginx-lb.conf

Hello,

I am really excited to have found this post.
One thing I found that I think is an error:

/nginx-lb/docker-compose.yaml

volumes:
- /root/kube-yaml/nginx-lb/nginx-lb.conf:/etc/nginx/nginx.conf

I think "kube-yaml" directory name is hard coded. I have this file at

/root/kubeadm-ha/nginx-lb/nginx-lb.conf
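
A minimal sketch of the adjusted mapping, assuming the repository is cloned to /root/kubeadm-ha as above (the path is environment-specific, so it should probably not be hard-coded):

# docker-compose.yaml (excerpt) with the path matching the actual clone location
volumes:
- /root/kubeadm-ha/nginx-lb/nginx-lb.conf:/etc/nginx/nginx.conf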

Service becomes unavailable after master-2 joins the etcd cluster

As the title says.
I'm using k8s version 1.11.2 and following the 1.11.1 instructions step by step. Every time, once master2 joins the etcd cluster the whole service becomes unavailable: kubectl stops responding, and on master1 docker ps shows the etcd container failing and restarting repeatedly. After I removed etcd.yaml from the /etc/kubernetes/manifests/ directory on master2, etcd on master1 restarted and returned to normal, but now kubectl reports a :6443 request error. Has all of the etcd data been lost? What should I do in this situation?

kubectl exec always timeout

I'm using 1.7.x to set up HA k8s. Everything is fine except that I always get a timeout when running kubectl exec -it <pod-name> -- /bin/sh. The error looks like:

kubectl -n kube-system exec -it kube-scheduler-app-web39v33 -- /bin/sh
Error from server: error dialing backend: dial tcp 10.83.1.1:10250: getsockopt: connection timed out
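
A quick way to check basic TCP reachability of the kubelet port named in the error (address and port taken from the error message above):

# run on the master that serves the kubectl exec request: can we reach the kubelet port at all?
nc -vz 10.83.1.1 10250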

Q&A

Hello,

I have a question. Though this is not related to Kube-HA, since you are the expert, I would like to ask:
Do we not need to set up an Ingress Controller?
What kind of features would an Ingress Controller bring to this HA setup?

Thanks,
Rock

After a node joins the cluster, the API address in /etc/kubernetes/kubelet.conf needs to be changed to the HA IP

Only kube-proxy points to the HA IP:
root@ubuntu:/etc/kubernetes# netstat -alnp |grep 6443
tcp 0 0 0.0.0.0:6443 0.0.0.0:* LISTEN 1337/haproxy
tcp 0 0 192.168.1.148:33958 192.168.4.130:6443 ESTABLISHED 22089/kube-proxy
tcp 0 0 192.168.1.148:43178 192.168.1.146:6443 ESTABLISHED 21876/kubelet

Here 192.168.1.146 is the local IP and 192.168.4.130 is my HA IP.
After changing the API address in /etc/kubernetes/kubelet.conf:

root@ubuntu:/etc/kubernetes# vim /etc/kubernetes/kubelet.conf
root@ubuntu:/etc/kubernetes# systemctl restart docker && systemctl restart kubelet
root@ubuntu:/etc/kubernetes# netstat -alnp |grep 6443
tcp 0 0 0.0.0.0:6443 0.0.0.0:* LISTEN 1337/haproxy
tcp 0 0 192.168.1.148:34212 192.168.4.130:6443 ESTABLISHED 22795/kubelet
tcp 0 0 192.168.1.148:34232 192.168.4.130:6443 ESTABLISHED 23008/kube-proxy
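
For reference, a minimal non-interactive sketch of that edit, using the local master address 192.168.1.146 and the HA IP 192.168.4.130 from the netstat output above (adjust to your own addresses):

# point the kubelet at the HA endpoint instead of a single master, then restart it
sed -i 's#server: https://192.168.1.146:6443#server: https://192.168.4.130:6443#' /etc/kubernetes/kubelet.conf
systemctl restart kubelet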

Question about the calico network plugin

I remember that a previous version of your deployment docs deployed two network plugins. I would like to ask what cleanup work is needed when switching the CNI plugin. I currently want to switch from flannel to calico; calico-node starts normally on master01, but not on master02 and master03.
The logs show that the br-**** bridges calico creates on the master nodes all use the 172.18.0.0/16 range, which prevents master02 and master03 from starting. Is there any way to solve this?

Add configuring network interface for flannel

According to official flannel documentation there is a known issue with flannel and vagrant:

Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address 10.0.2.15, is for external traffic that gets NATed.

This may lead to problems with flannel. By default, flannel selects the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the --iface eth1 flag to flannel so that the second interface is chosen.

Can you please add a network interface option for flannel in your cannal.yaml?
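
A minimal sketch of what such an option could look like in the flannel container of the canal DaemonSet; the exact manifest layout and image tag in this repo may differ:

# canal/flannel DaemonSet (excerpt, illustrative): pin flannel to the host-only interface
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel    # keep the tag from your existing manifest
  command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth1" ]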

A newly created cluster reports an expired certificate

Hello, my newly created cluster was fine this morning, but in the afternoon it started reporting:

[root@k8s-master01 ~]# kubectl get po
No resources found.
Unable to connect to the server: x509: certificate has expired or is not yet valid

Then I tried to regenerate the certificates with kubeadm alpha phase certs all --config /root/kubeadm-config.yaml, but that also fails:

[root@k8s-master01 ~]# kubeadm alpha phase certs all --config /root/kubeadm-config.yaml
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
failure loading ca certificate: the certificate is not valid yet

People online say this is caused by unsynchronized clocks, but my clocks are synchronized. Have you run into this before? How can I replace the certificates?

KUBECONFIG settings cannot be used in general user account

If the user works under a regular (non-root) account, I suggest running the following commands instead of "export KUBECONFIG=/etc/kubernetes/admin.conf":

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config

Any Ubuntu version?

Hello,
very good job!
Is there any Ubuntu (or Debian-based) version of this script?

Best

Kubelet service is down

When I tried to bring up the service, it showed as down.

error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Are there any solutions to start the kubelet service?

Grafana import does not work with a dashboard ID from grafana.com

Everything is OK, but the Grafana import does not work. Error: {"data":"","status":502,"config":{"method":"GET","transformRequest":[null],"transformResponse":[null],"jsonpCallbackParam":"callback","url":"api/gnet/dashboards/2","retry":0,"headers":{"X-Grafana-Org-Id":1,"Accept":"application/json, text/plain, /"}},"statusText":"Bad Gateway","xhrStatus":"complete","isHandled":true}

I have a problem with the nginx proxy

Hello,
your advice was very helpful.

My Kubernetes version is 1.12.1
and the OS is Ubuntu 16.04.

I completed the whole Kubernetes HA setup and
installed the dashboard,
so my URL was http://VIP:30000,

but it does not connect.
On my server curl -k http://VIP:16443 works,
but from my local machine it does not, even though I registered the hostname in my hosts file.

Please help me, thank you!

Hello, a question about flannel and calico

In the official documentation, kubeadm init generally only needs one network plugin.
Here both flannel and calico are used. Is it correct to understand that flannel provides the network between the master nodes, and calico the network between the cluster nodes?
Thanks!

k8s HA cluster setup

@cookeem
I have a few questions about creating a k8s HA cluster using kubeadm.

  1. In your instruction, you mentioned that starting from version v1.7.0,
    kubernetes uses the NodeRestriction admission control that prevents other masters from
    joining the cluster.
    As a workaround, you reset the kube-apiserver's admission-control settings to
    the v1.6.x recommended config.

So, did you figure out how to make it work with NodeRestriction admission control?

  2. It appears to me that your solution works. I also noticed there has been
    some work to make kubeadm HA available in 1.9: kubernetes/kubeadm#261
    Do you know exactly how your HA setup is different from the one being working on there?

  3. I also noticed there is another approach for creating a k8s HA cluster:
    https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md
    Just curious how you would compare this approach with yours. Any thoughts?
    Thank you for your time.

couldn't parse external etcd version

I ran kubeadm init --config=kubeadm-init.yaml and got the error below while initialising:

[ERROR ExternalEtcdVersion]: couldn't parse external etcd version "": Version string empty
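
One way to check whether the external etcd endpoints actually respond is to query etcd's /version endpoint, which is roughly what this preflight check reads; the URL below is a placeholder endpoint of the form used in kubeadm-init.yaml, so adjust it to your own:

# should return a small JSON document with etcdserver/etcdcluster versions; an empty reply would explain the error
curl http://192.168.120.10:2379/version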

Question on Virtual IP

In your post you mentioned the step below for workers:
on all kubernetes worker nodes: set the /etc/kubernetes/bootstrap-kubelet.conf server settings; make sure these settings use the keepalived virtual IP and nginx load balancer port (here: https://192.168.20.10:16443)

Does keepalived need to be installed on the workers too? Without that, how would the workers reach the virtual IP?
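
For reference, a minimal sketch of the relevant kubeconfig fragment on a worker, assuming the VIP and port from the quoted step:

# /etc/kubernetes/bootstrap-kubelet.conf (excerpt) -- the server points at the VIP, not an individual master
clusters:
- cluster:
    certificate-authority-data: <unchanged>
    server: https://192.168.20.10:16443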

v1.11.1 installation problem: pod creation fails

@cookeem Hi, I have been following your k8s installation tutorials. I tried building v1.11.1-ha, and every step up to keepalived and nginx-lb succeeded, great work! There were a few problems along the way that I solved myself, but when deploying metrics-server and the dashboard the creation fails and no podIP is assigned. I don't know the cause, so I have attached screenshots of the relevant logs. I suspected a firewall problem and stopped the firewall on all 3 masters, but the problem remains.

(screenshots 1, 2 and 3 omitted)

Calico and flanneld

Hello,

I really liked the post; I'm implementing it and have some doubts.

Why did you use Calico and flanneld?

What is the function of each of them in the cluster?

I imagined they had the same function.

Thanks

exited due to signal 15 when checking keepalived status

When I checked the keepalived status, I found it is not working. I use an external VIP 10.159.222.x (k8s-master-lb), and the external VIP is able to point to the 3 masters. I configured the 3 masters successfully, but it seems keepalived is not working correctly. Could you please help confirm the questions below:

  1. Can the kubeadm-ha solution use an external VIP?
  2. For the pre-assigned VIP in your case (192.168.20.10 / k8s-master-lb), who creates it and how is it combined with the 3 masters' IPs and hosts?

$ systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2019-01-31 15:16:05 WET; 2h 9min ago
Process: 14343 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 14344 (keepalived)
Tasks: 6
Memory: 5.7M
CGroup: /system.slice/keepalived.service
├─14344 /usr/sbin/keepalived -D
├─14345 /usr/sbin/keepalived -D
├─14346 /usr/sbin/keepalived -D
├─20865 /usr/sbin/keepalived -D
├─20866 /bin/bash /etc/keepalived/check_apiserver.sh
└─20871 sleep 5

Jan 31 17:25:16 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:25:21 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:25:26 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:25:31 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:25:36 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:25:41 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:25:46 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:25:51 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:25:56 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
Jan 31 17:26:01 k8s-master02 Keepalived_vrrp[14346]: /etc/keepalived/check_apiserver.sh exited due to signal 15
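
For reference, a minimal sketch of what a keepalived check_apiserver.sh health check typically looks like; this is an assumption about the script's content rather than the exact file shipped with the repo, with the VIP and port taken from the guide's example (192.168.20.10:16443):

#!/bin/bash
# /etc/keepalived/check_apiserver.sh -- exit non-zero when the apiserver endpoint stops answering
if ! curl --silent --max-time 2 --insecure https://192.168.20.10:16443/ -o /dev/null; then
    exit 1
fi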

Cannot connect to the dashboard

Hello, I have installed all the components, as follows:
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-kwz9t 2/2 Running 0 3h
calico-node-rj8p8 2/2 Running 0 3h
calico-node-xfsg5 2/2 Running 0 3h
coredns-777d78ff6f-4rcsb 1/1 Running 0 4h
coredns-777d78ff6f-7xqzx 1/1 Running 0 4h
etcd-k8s-master01 1/1 Running 0 4h
etcd-k8s-master02 1/1 Running 0 4h
etcd-k8s-master03 1/1 Running 9 3h
heapster-5874d498f5-q2gzx 1/1 Running 0 13m
kube-apiserver-k8s-master01 1/1 Running 0 3h
kube-apiserver-k8s-master02 1/1 Running 0 3h
kube-apiserver-k8s-master03 1/1 Running 1 3h
kube-controller-manager-k8s-master01 1/1 Running 0 3h
kube-controller-manager-k8s-master02 1/1 Running 1 3h
kube-controller-manager-k8s-master03 1/1 Running 0 3h
kube-proxy-4cjhm 1/1 Running 0 4h
kube-proxy-lkvjk 1/1 Running 2 4h
kube-proxy-m7htq 1/1 Running 0 4h
kube-scheduler-k8s-master01 1/1 Running 2 4h
kube-scheduler-k8s-master02 1/1 Running 0 4h
kube-scheduler-k8s-master03 1/1 Running 2 3h
kubernetes-dashboard-7954d796d8-2k4hx 1/1 Running 0 7m
metrics-server-55fcc5b88-r8v5j 1/1 Running 0 13m
monitoring-grafana-9b6b75b49-4zm6d 1/1 Running 0 45m
monitoring-influxdb-655cd78874-lmz5l 1/1 Running 0 45m

The node information is as follows:
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 4h v1.11.1
k8s-master02 Ready master 4h v1.11.1
k8s-master03 Ready master 4h v1.11.1
Accessing the dashboard's port 30000 via curl works:
[root@k8s-master01 ~]# curl -k https://k8s-master-lb:30000
<!doctype html> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <script src="static/vendor.bd425c26.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.b5ad51ac.js"></script>

But the browser cannot open this page. Do you know what the reason might be?
(screenshot omitted)

The dashboard pod's logs are as follows:
[root@k8s-master01 ~]# kubectl logs kubernetes-dashboard-7954d796d8-2k4hx -n kube-system
2018/10/31 10:19:21 Starting overwatch
2018/10/31 10:19:21 Using in-cluster config to connect to apiserver
2018/10/31 10:19:21 Using service account token for csrf signing
2018/10/31 10:19:21 No request provided. Skipping authorization
2018/10/31 10:19:21 Successful initial request to the apiserver, version: v1.11.1
2018/10/31 10:19:21 Generating JWE encryption key
2018/10/31 10:19:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/10/31 10:19:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/10/31 10:19:22 Initializing JWE encryption key from synchronized object
2018/10/31 10:19:22 Creating in-cluster Heapster client
2018/10/31 10:19:22 Auto-generating certificates
2018/10/31 10:19:22 Successfully created certificates
2018/10/31 10:19:22 Serving securely on HTTPS port: 8443
2018/10/31 10:19:22 Successful request to heapster
2018/10/31 10:23:50 http: TLS handshake error from 172.168.0.1:55803: tls: first record does not look like a TLS handshake
2018/10/31 10:25:04 http: TLS handshake error from 172.168.0.1:55814: tls: first record does not look like a TLS handshake
2018/10/31 10:28:34 http: TLS handshake error from 172.168.0.1:55888: tls: first record does not look like a TLS handshake

Heapster doesn't work

Hello, first of all thanks for your HOWTO; it made it possible to easily create an HA k8s cluster in about an hour! I used the 1.7 version, just replacing the 1.7.0 components with the latest minor version (1.7.8). Everything works fine apart from Heapster: there is no way to get the Dashboard to display anything, and I can't see any useful message in the logs. What could it be?

I have a problem with "add etcd member to the cluster"

Hello,
I have a question about "add etcd member to the cluster".

My Kubernetes version is 1.12.1
and the OS is Ubuntu 16.04.

I ran
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380

but I got this error:
Unable to connect to the server: dial tcp 178.128.174.200:6443: i/o timeout

firewalld is completely disabled and I have already copied the cert files.

Please help me, thank you!

Language Preference

How can I select English for the K8s deployment while following the instructions in your repo?

Pods stuck in ContainerCreating

A single-master environment works without any problems, but when deploying multiple masters, containers cannot be created.

coredns-65dcdb4cf-4vxs4                  0/1       ContainerCreating   0          1h
dacc-d66dfdcc5-m5hcl                     0/1       ContainerCreating   0          42m
eacc-6d9ccfd9b7-kdjfs                    0/1       ContainerCreating   0          42m
Events:
  Type     Reason                  Age                 From               Message
  ----     ------                  ----                ----               -------
  Warning  FailedScheduling        34m (x37 over 44m)  default-scheduler  0/3 nodes are available: 3 PodToleratesNodeTaints.
  Normal   SuccessfulMountVolume   32m                 kubelet, node1     MountVolume.SetUp succeeded for volume "default-token-vh9f5"
  Normal   SandboxChanged          25m (x12 over 31m)  kubelet, node1     Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  1m (x58 over 31m)   kubelet, node1     Failed create pod sandbox.

I roughly followed your approach, with some differences: the LB uses HAProxy.

The three masters share the same root certificate ca.crt and ca.key; the other certificates were generated individually on each master. kubeadm init was run on all three masters.

[root@node1 ~]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
master1   Ready     master    1h        v1.9.1
master2   Ready     master    1h        v1.9.1
master3   Ready     master    1h        v1.9.1
node1     Ready     <none>    34m       v1.9.1

kubeadm config file:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerCertSANs:
- 172.31.244.231
- 172.31.244.232
- 172.31.244.233
- 172.31.244.234
- master1
- master2
- master3
- node1
- 47.75.1.72

etcd:
  endpoints:
  - http://172.31.244.232:2379

apiServerExtraArgs:
  endpoint-reconciler-type: lease

networking:
  podSubnet: 192.168.0.0/16
kubernetesVersion: v1.9.1
featureGates:
  CoreDNS: true

I don't know where the problem is.

A small request

Could you share how to use Jenkins on a k8s cluster for continuous integration and builds, and then automatically release code to production?

A question about K8SHA_CALICO_REACHABLE_IP

calico reachable ip address

export K8SHA_CALICO_REACHABLE_IP=192.168.60.1
Is the 192.168.60.1 in this step our server's gateway address, or is it an IP used by calico itself that has nothing to do with the internal network?
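
For context, a reachable-IP value like this is commonly fed into calico's IP autodetection, roughly as below; how this repo actually wires the variable is an assumption here, and the snippet only illustrates the standard calico-node setting:

# calico-node container env (illustrative): pick the node IP on the interface that can reach this address
- name: IP_AUTODETECTION_METHOD
  value: "can-reach=192.168.60.1"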

Check that keepalived works

Hello, how do I check that keepalived works?
I ping the virtual IP from one master node and get:
ping 10.10.61.154
PING 10.10.61.154 (10.10.61.154) 56(84) bytes of data.
From 10.10.61.211 icmp_seq=1 Destination Host Unreachable
From 10.10.61.211 icmp_seq=2 Destination Host Unreachable
So it doesn't work.
How should the router be configured for keepalived to work correctly?
I also use a proxy server to access internet addresses.
On another setup it works fine; there is no proxy there and the router is simpler.
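
A couple of commands that can help confirm whether the VIP is actually bound and what keepalived is doing (the address is the VIP from the ping above):

# on each master: is the VIP currently assigned to an interface on this host?
ip addr | grep 10.10.61.154
# recent keepalived state transitions / VRRP messages
journalctl -u keepalived --since "10 minutes ago"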

calico authentication error to access apiserver

Hi sir,
I just tried your newest updates based on canal. However, I got stuck at the deployment of canal. I found that calico-node tries to access 10.96.0.1:443 (I think that is the apiserver). Then I see the apiserver print lots of authentication errors like "Unable to authenticate the request due to an error". I tried deleting all the secrets so that they would be regenerated, but it doesn't work. Have you run into the same problem, or do you have any experience handling this?

Besides, I also want to ask about cleanly removing kubernetes. I tried "kubeadm reset" to reset everything (I drained and deleted the node first) and then followed your commands to remove the files. However, I still find calico pods being initialized automatically after kubeadm init. Do you have any ideas about this?

Thanks,
Augustin

nodes are not joined

Using v1.7, nodes are not joined. I scp /etc/kubernetes to the other masters, then run systemctl daemon-reload && systemctl restart kubelet followed by systemctl status kubelet. It is running; however, only the initial node shows up. Should we not be using the kubeadm join command around this point?

Testing k8s ha configuration by shutting down the first k8s master node

@cookeem, I followed your instructions and was able to deploy an HA Kubernetes cluster (with 3 k8s master nodes and 2 k8s nodes) using Kubernetes version 1.8.1.
Everything seems to work just as described in the instructions.

Next, I focused on testing the high availability configuration. To do so, I attempted to shut down the first k8s master. Once the first k8s master was brought down, the keepalived service on that node stopped and the virtual IP address transferred to the second k8s master. However, things started falling apart :(

Specifically, on the second (or third) master, when running the command: 'kubectl get nodes', the output shows something like the following:

NAME STATUS ROLES ...
k8s-master1 NotReady master ...
k8s-master2 Ready ...
k8s-master3 Ready ...
k8s-node1 Ready ...
k8s-node2 Ready ...

Also, on k8s-master2 or k8s-master3, when I ran 'kubectl logs' to check the controller-manager and
scheduler, it appeared they did NOT re-elect a new leader. As a result, all of the kubernetes services that were exposed before were no longer accessible.

Do you have any idea why the re-election did NOT occur for the controller-manager and
scheduler on the remaining k8s master nodes?

apiserver.ext does not exist

I am trying out v1.7. In the section on creating certificates, I'm asked to edit this file: apiserver.ext

It does not exist in the current directory, or anywhere on the filesystem. Something is amiss.

Initialization anomaly

The init config file sets podSubnet:
networking:
podSubnet: 10.244.0.0/16

But the initialization log shows the following:
Feb 7 08:23:46 master1 kubelet: I0207 08:23:46.268481 6924 kuberuntime_manager.go:918] updating runtime config through cri with podcidr 10.244.0.0/24
Feb 7 08:23:46 master1 kubelet: I0207 08:23:46.268679 6924 docker_service.go:343] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Feb 7 08:23:46 master1 kubelet: I0207 08:23:46.268838 6924 kubelet_network.go:196] Setting Pod CIDR: -> 10.244.0.0/24

What is going on here? When I deployed the cluster earlier, flannel failed to start, and I happened to notice the log above; it meant flannel's Network field was inconsistent with the pod CIDR, which I guess was why flannel failed to start. After changing flannel's Network setting to match, it started successfully.

k8s 1.12.x

Hi, do you plan to upgrade this wonderful code to k8s version 1.12.x?

I really appreciate your sharing this code; it helped me set up k8s with HA.

Thank you very much.

verify installation

Hi cookeem !

In the last step, deploying an nginx application to verify the installation, I had to use a YAML file to create the Pod and Service.

I needed to configure hostNetwork in the Deployment for the example query to work (see: https://github.com/projectcalico/felix/issues/1361).

Without "hostNetwork: true", the curl command always times out.

This is my sample:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: my-nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      hostNetwork: true
      containers:
      - name: nginx
        image: nginx
        ports:
          - containerPort: 80

can I use 127.0.0.1 or a domain when keepalived is not an option?

In my case, we cannot use keepalived, therefore we have to use other approaches.
I can only think of two ways to get it done:

  1. Every node starts an nginx that load-balances the three API servers; the node just joins via 127.0.0.1:8443 (8443 being the nginx port that forwards to the real API servers).
  2. Use a domain such as k8s.mycompany.com.

I only tested the first one, but it didn't work well.
I don't know if I'm missing something in the configuration.

Can you give some advice on this?
Any tips are appreciated.
Thank you very much.
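
For the first option, a minimal sketch of a local nginx TCP load balancer, assuming nginx is built with the stream module as in the repo's nginx-lb setup; the three apiserver addresses are placeholders to replace with your masters:

# /etc/nginx/nginx.conf (excerpt) -- listen locally on 8443, forward to the real apiservers
stream {
    upstream apiservers {
        server 192.168.20.20:6443;
        server 192.168.20.21:6443;
        server 192.168.20.22:6443;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_pass apiservers;
    }
}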

What is istio used for?

Hello, what is istio used for? I have never used it before; is it OK not to install it?
I get errors when installing it:

horizontalpodautoscaler.autoscaling/istio-pilot created
service/jaeger-query created
service/jaeger-collector created
service/jaeger-agent created
service/zipkin created
service/tracing created
mutatingwebhookconfiguration.admissionregistration.k8s.io/istio-sidecar-injector created
unable to recognize "istio/istio-demo.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
unable to recognize "istio/istio-demo.yaml": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "stdio" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "logentry" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "logentry" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "prometheus" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "kubernetesenv" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "kubernetes" in version "config.istio.io/v1alpha2"
unable to recognize "istio/istio-demo.yaml": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"
unable to recognize "istio/istio-demo.yaml": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"
