techiescamp / vagrant-kubeadm-kubernetes

Vagrantfile & Scripts to setup Kubernetes Cluster using Kubeadm for CKA, CKAD and CKS practice environment

Home Page: https://devopscube.com/kubernetes-cluster-vagrant/

License: GNU General Public License v3.0

Language: Shell (100%)
Topics: kubernetes, kubernetes-cluster, kubernetes-setup, kubeadm, kubeadm-cluster, kubeadmin-kubernetes, cka, ckad, cks

vagrant-kubeadm-kubernetes's Introduction

Vagrantfile and Scripts to Automate Kubernetes Setup using Kubeadm [Practice Environment for CKA/CKAD and CKS Exams]

This fully automated setup for CKA, CKAD, and CKS practice labs has been tested on the following systems:

  • Windows
  • Ubuntu Desktop
  • Mac Intel-based systems

If you are a Mac Apple Silicon user, please use the following repo.

CKA, CKAD, CKS, KCNA & KCSA Voucher Codes (35% OFF)

As part of our commitment to helping the DevOps community save money on Kubernetes certifications, we continuously update the latest voucher codes from the Linux Foundation.

🚀 CKA, CKAD, CKS, KCNA, or KCSA exam aspirants can save 20% today using code DCUBE20 at https://kube.promo/devops. It is a limited-time offer from the Linux Foundation.

The following are the best bundles to save up to 35% with code COMBUNDLE25.

Use code SCRIPT20 to save $326 with the following bundle.

Note: You have one year from registration to take the certification exam.

Setup Prerequisites

  • A working Vagrant setup using Vagrant + VirtualBox

Documentation

Current k8s version for the CKA, CKAD, and CKS exams: 1.29.

The setup is updated to cluster version 1.29.

Refer to this link for the full documentation: https://devopscube.com/kubernetes-cluster-vagrant/

Prerequisites

  1. Working Vagrant setup
  2. A workstation with 8+ GB RAM, as the VMs use 3 vCPUs and 4+ GB RAM

For MAC/Linux Users

The latest version of VirtualBox for Mac/Linux can cause issues.

Create/edit the /etc/vbox/networks.conf file and add the following to avoid any network-related issues.

* 0.0.0.0/0 ::/0

or run the commands below:

sudo mkdir -p /etc/vbox/
echo "* 0.0.0.0/0 ::/0" | sudo tee -a /etc/vbox/networks.conf

This allows host-only networks to use any range, not just 192.168.56.0/21, as described here: https://discuss.hashicorp.com/t/vagrant-2-2-18-osx-11-6-cannot-create-private-network/30984/23

Bring Up the Cluster

To provision the cluster, execute the following commands.

git clone https://github.com/scriptcamp/vagrant-kubeadm-kubernetes.git
cd vagrant-kubeadm-kubernetes
vagrant up

Set Kubeconfig file variable

cd vagrant-kubeadm-kubernetes
cd configs
export KUBECONFIG=$(pwd)/config

or you can copy the config file to the ~/.kube directory:

cp config ~/.kube/

Install Kubernetes Dashboard

The dashboard is automatically installed by default, but it can be skipped by commenting out the dashboard version in settings.yaml before running vagrant up.
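For reference, a hedged way to do that from the shell, assuming settings.yaml contains a line similar to "dashboard: 2.7.0" (the exact key name may differ in your copy of settings.yaml):

# Comment out the dashboard version so provisioning skips the dashboard install.
# Assumes a line like "  dashboard: 2.7.0" exists in settings.yaml.
sed -i.bak 's/^\([[:space:]]*dashboard:\)/# \1/' settings.yaml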

If you skip the dashboard installation, you can deploy it later by enabling it in settings.yaml and running the following:

vagrant ssh -c "/vagrant/scripts/dashboard.sh" controlplane

Kubernetes Dashboard Access

To get the login token, copy it from config/token or run the following command:

kubectl -n kubernetes-dashboard get secret/admin-user -o go-template="{{.data.token | base64decode}}"

Make the dashboard accessible:

kubectl proxy

Open the site in your browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

To shut down the cluster, run:

vagrant halt

To restart the cluster, run:

vagrant up

To destroy the cluster, run:

vagrant destroy -f

Network graph

                  +-------------------+
                  |    External       |
                  |  Network/Internet |
                  +-------------------+
                           |
                           |
             +-------------+--------------+
             |        Host Machine        |
             |     (Internet Connection)  |
             +-------------+--------------+
                           |
                           | NAT
             +-------------+--------------+
             |    K8s-NATNetwork          |
             |    192.168.99.0/24         |
             +-------------+--------------+
                           |
                           |
             +-------------+--------------+
             |     k8s-Switch (Internal)  |
             |       192.168.99.1/24      |
             +-------------+--------------+
                  |        |        |
                  |        |        |
          +-------+--+ +---+----+ +-+-------+
          |  Master  | | Worker | | Worker  |
          |   Node   | | Node 1 | | Node 2  |
          |192.168.99| |192.168.| |192.168. |
          |   .99    | | 99.81  | | 99.82   |
          +----------+ +--------+ +---------+

This network graph shows:

  1. The host machine connected to the external network/internet.
  2. The NAT network (K8s-NATNetwork) providing a bridge between the internal network and the external network.
  3. The internal Hyper-V switch (k8s-Switch) connecting all the Kubernetes nodes.
  4. The master node and two worker nodes, each with their specific IP addresses, all connected to the internal switch.

vagrant-kubeadm-kubernetes's People

Contributors

allyunion, bibinwilson, chieftainy2k, chriswells0, dswhitely1, hodgiwabi, hollowman6, inkkim, jxlwqq, luisgmuniz, magicalbob, maximmai, mhrdq8i, rajivgangadharan, ramanagali, sbruzzese902, scriptcamp, techiescamp, thuzxj, toelke, vladislavbannikov, wils93, yishaiz


vagrant-kubeadm-kubernetes's Issues

Kubernetes default service timeout

Error:

dial tcp: lookup kubernetes.default.svc : i/o timeout

The issue gets fixed after restarting the CoreDNS pods.

There is no issue with newly created services.
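For reference, the CoreDNS restart mentioned above can be done with a single command (a generic workaround, not specific to this repo's manifests):

kubectl -n kube-system rollout restart deployment coredns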

Nodes should sleep, or wait, before trying to join

I'm currently adjusting your scripts so they work in my ESXi homelab.

One thing I've noticed is that the nodes complete their first stages of setup before the master is ready. This leads to the nodes trying to run "join.sh", which doesn't exist yet because the master hasn't generated it.

Ideally, the nodes should test whether the master is up and running before running "join.sh". Or better yet, simply test whether the file has been created with an until-loop.

Suggestion:

echo "Sleeping until K8s master is ready..."
until [[ -s /vagrant/join.sh ]]
do
    sleep 10
done

EDIT:
Ah, that might still cause a problem, because the master.sh script first creates an empty version of the file before actually writing the config into it. We should also test that the contents are non-empty; only then should the node try to run join.sh. So the test should use "-s", not "-f".
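Putting the two suggestions together, a minimal sketch of the node-side wait logic (the join script path is an assumption; depending on the repo version it may be /vagrant/join.sh or /vagrant/configs/join.sh):

echo "Sleeping until the K8s master has generated a non-empty join.sh..."
# -s is true only when the file exists AND has a size greater than zero,
# which also covers the case where master.sh pre-creates an empty file.
until [[ -s /vagrant/configs/join.sh ]]
do
    sleep 10
done
bash /vagrant/configs/join.sh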

GPG error while installing kubectl,kubelet,kubeadm packages

Hi all,

I'm trying to deploy a Kubernetes cluster to practice for the CKA exam. I followed the Kubernetes documentation last night and got the same error message. I assumed I had done something wrong, so I tried this repo, which a friend of mine suggested. After running the "vagrant up" command I got the following error messages, and the process stopped with a non-zero exit status.

I can think of two possible causes for this error:
1- The key was not properly added to the keyring, since apt-key is deprecated and Ubuntu suggests using trusted.gpg.d
2- The key is outdated.

master: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
master: Reading package lists...
master: W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
master: E: The repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is not signed.

Has anybody faced this problem? If so, I am open to any suggested solutions. Thanks.
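For context, the root cause here is that the legacy packages.cloud.google.com / apt.kubernetes.io repositories were deprecated and later shut down. A sketch of the keyring-based setup for the community-owned repository (pin the minor version to match the kubernetes_version you install; v1.29 below is illustrative):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update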

Erroring with master: error: resource name may not be empty

Below are the steps where it errors and exits

master: serviceaccount/admin-user created
master: + cat
master: + kubectl apply -f -
master: secret/admin-user created
master: + cat
master: + kubectl apply -f -
master: clusterrolebinding.rbac.authorization.k8s.io/admin-user created
master: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
master: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
master: error: resource name may not be empty

@bibinwilson @inkkim thoughts ?

debug logs
master: serviceaccount/admin-user created
DEBUG ssh: stderr: + cat

  • kubectl apply -f -
    INFO interface: detail: + cat
    INFO interface: detail: master: + cat
    master: + cat
    INFO interface: detail: + kubectl apply -f -
    INFO interface: detail: master: + kubectl apply -f -
    master: + kubectl apply -f -
    DEBUG ssh: Sending SSH keep-alive...
    INFO interface: detail: secret/admin-user created
    INFO interface: detail: master: secret/admin-user created
    master: secret/admin-user created
    DEBUG ssh: stderr: + cat
    INFO interface: detail: + cat
    INFO interface: detail: master: + cat
    master: + cat
    DEBUG ssh: stderr: + kubectl apply -f -
    INFO interface: detail: + kubectl apply -f -
    INFO interface: detail: master: + kubectl apply -f -
    master: + kubectl apply -f -
    INFO interface: detail: clusterrolebinding.rbac.authorization.k8s.io/admin-user created
    INFO interface: detail: master: clusterrolebinding.rbac.authorization.k8s.io/admin-user created
    master: clusterrolebinding.rbac.authorization.k8s.io/admin-user created
    DEBUG ssh: stderr: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
    INFO interface: detail: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
    INFO interface: detail: master: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
    master: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
    DEBUG ssh: stderr: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
    INFO interface: detail: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
    INFO interface: detail: master: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
    master: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
    DEBUG ssh: stderr: error: resource name may not be empty
    INFO interface: detail: error: resource name may not be empty
    INFO interface: detail: master: error: resource name may not be empty
    master: error: resource name may not be empty
    DEBUG ssh: Exit status: 1
    ERROR warden: Error occurred: The SSH command responded with a non-zero exit status. Vagrant
    assumes that this means the command failed. The output for this command
    should be in the log above. Please read the output to determine what
    went wrong.
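For context, on Kubernetes 1.24+ a ServiceAccount no longer gets an auto-created token Secret, so the jsonpath={.secrets[0].name} lookup returns an empty string and the subsequent kubectl get secret '' fails. A hedged sketch of one workaround, assuming the admin-user ServiceAccount in the kubernetes-dashboard namespace shown in the logs (newer versions of the script instead create the Secret explicitly, as the README above shows):

# Request a token via the TokenRequest API instead of reading an auto-created Secret.
kubectl -n kubernetes-dashboard create token admin-user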

bento/ubuntu-22.04 Vagrant box update 2 days ago prevents the Calico network from initialising

Hi
After the Vagrant box for bento/ubuntu-22.04 (https://app.vagrantup.com/bento/boxes/ubuntu-22.04) was updated 2 days ago, the Calico network pods are not in a running state. Refer below for more details.
The error is "Error: container create failed: pivot_root: Invalid argument".
Please advise. Thank you.

kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS                      RESTARTS   AGE
kube-system   calico-kube-controllers-658d97c59c-78lv4   1/1     Running                     0          91s
kube-system   calico-node-mvqw7                          0/1     Init:CreateContainerError   0          91s
kube-system   coredns-76f75df574-8n6wj                   1/1     Running                     0          91s
kube-system   coredns-76f75df574-96d8g                   1/1     Running                     0          91s
kube-system   etcd-controlplane                          1/1     Running                     0          104s
kube-system   kube-apiserver-controlplane                1/1     Running                     0          104s
kube-system   kube-controller-manager-controlplane       1/1     Running                     0          104s
kube-system   kube-proxy-hxnk9                           1/1     Running                     0          91s
kube-system   kube-scheduler-controlplane                1/1     Running                     0          104s
kube-system   metrics-server-d4dc9c4f-znwdn              0/1     Pending                     0          91s
kubectl describe po calico-node-mvqw7 -n kube-system
Name:                 calico-node-mvqw7
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      calico-node
Node:                 controlplane/192.168.96.116
Start Time:           Sat, 10 Aug 2024 00:07:49 +0000
Labels:               controller-revision-hash=574c44bccd
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   192.168.96.116
IPs:
  IP:           192.168.96.116
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  cri-o://bf6974f1a6c71723bf9f2a70d2c28ccf083eba89aac0e4cdb2b4ca8178aefd2e
    Image:         docker.io/calico/cni:v3.25.0
    Image ID:      docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 10 Aug 2024 00:08:02 +0000
      Finished:     Sat, 10 Aug 2024 00:08:02 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5hnn (ro)
  install-cni:
    Container ID:  cri-o://4a3d1b676e31d696a440b41ff038b6c016c050c34804fbf89bdf473942416db4
    Image:         docker.io/calico/cni:v3.25.0
    Image ID:      docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 10 Aug 2024 00:08:02 +0000
      Finished:     Sat, 10 Aug 2024 00:08:04 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5hnn (ro)
  mount-bpffs:
    Container ID:
    Image:         docker.io/calico/node:v3.25.0
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      calico-node
      -init
      -best-effort
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /nodeproc from nodeproc (ro)
      /sys/fs from sys-fs (rw)
      /var/run/calico from var-run-calico (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5hnn (ro)
Containers:
  calico-node:
    Container ID:
    Image:          docker.io/calico/node:v3.25.0
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=10s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      CALICO_IPV4POOL_IPIP:               Always
      CALICO_IPV4POOL_VXLAN:              Never
      CALICO_IPV6POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_HEALTHENABLED:                true
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/bpf from bpffs (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5hnn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 False
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sys-fs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  bpffs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  Directory
  nodeproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  kube-api-access-j5hnn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  114s               default-scheduler  Successfully assigned kube-system/calico-node-mvqw7 to controlplane
  Normal   Pulling    113s               kubelet            Pulling image "docker.io/calico/cni:v3.25.0"
  Normal   Pulled     101s               kubelet            Successfully pulled image "docker.io/calico/cni:v3.25.0" in 11.461s (11.461s including waiting)
  Normal   Created    101s               kubelet            Created container upgrade-ipam
  Normal   Started    101s               kubelet            Started container upgrade-ipam
  Normal   Pulled     101s               kubelet            Container image "docker.io/calico/cni:v3.25.0" already present on machine
  Normal   Created    101s               kubelet            Created container install-cni
  Normal   Started    101s               kubelet            Started container install-cni
  Normal   Pulling    99s                kubelet            Pulling image "docker.io/calico/node:v3.25.0"
  Normal   Pulled     80s                kubelet            Successfully pulled image "docker.io/calico/node:v3.25.0" in 12.332s (18.852s including waiting)
  Warning  Failed     13s (x7 over 80s)  kubelet            Error: container create failed: pivot_root: Invalid argument
  Normal   Pulled     13s (x6 over 80s)  kubelet            Container image "docker.io/calico/node:v3.25.0" already present on machine
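One possible workaround while the box regression is investigated, sketched below under the assumption that the previously released bento/ubuntu-22.04 box still works in your environment (the version strings are placeholders):

# See which box versions are installed locally.
vagrant box list
# Remove the newly downloaded release so Vagrant falls back to the older cached one,
# or pin config.vm.box_version in the Vagrantfile to a known-good release.
vagrant box remove bento/ubuntu-22.04 --box-version <newer-version>
vagrant destroy -f && vagrant up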

NO_PUBKEY B53DC80D13EDEF05

master: Err:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
master:   The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
master: Reading package lists...
master: W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05

Why is the kube-apiserver container trying to find a container ID when it is started by kubelet?

I accidentally quit the VirtualBox VM and then just booted it back up with Vagrant.
kube-apiserver is being restarted by kubelet.

root@master-node:/etc/kubernetes/manifests# crictl ps
CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
0493210a252eb       deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3   49 seconds ago      Running             kube-apiserver            10                  73b158a97e9d0       kube-apiserver-master-node
5c3e62e7edeb4       655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f   39 minutes ago      Running             kube-scheduler            2                   d621927fbf32d       kube-scheduler-master-node
c054d83005832       e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d   43 minutes ago      Running             kube-controller-manager   2                   717b0951ba423       kube-controller-manager-master-node
1a1287b77ccb5       46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd   43 minutes ago      Running             kube-proxy                1                   6cf42b47cdf54       kube-proxy-s4h24
f2d57a250e409       fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7   43 minutes ago      Running             etcd                      2                   2d18ec2efa1dd       etcd-master-node

But querying the container's logs prints this message.

root@master-node:/etc/kubernetes/manifests# crictl logs kube-apiserver-master-node
E0213 14:37:59.632422   33644 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"kube-apiserver-master-node\": container with ID starting with kube-apiserver-master-node not found: ID does not exist" containerID="kube-apiserver-master-node"
FATA[0000] rpc error: code = NotFound desc = could not find container "kube-apiserver-master-node": container with ID starting with kube-apiserver-master-node not found: ID does not exist

Why is kube-apiserver trying to find a container ID named kube-apiserver-master-node?

Here is my environment.

  • 2020 MacBook Pro 13"
  • last commit: 15d2963

Does anyone have the same experience or any solution?
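As a side note, crictl logs expects a container ID (the first column of crictl ps output), not the pod name, which is why the lookup for "kube-apiserver-master-node" fails; a sketch using the container ID from the output above:

# Find the kube-apiserver container ID, then fetch its logs.
crictl ps | grep kube-apiserver
crictl logs 0493210a252eb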

Master node failing while downloading the calico image

master: Running provisioner: shell...
master: Running: C:/Users/chaisri/AppData/Local/Temp/vagrant-shell20230220-17768-1mikv85.sh
master: ++ hostname -s
master: + NODENAME=master-node
master: + sudo kubeadm config images pull
master: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.1
master: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.1
master: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.1
master: [config/images] Pulled registry.k8s.io/kube-proxy:v1.26.1
master: [config/images] Pulled registry.k8s.io/pause:3.9
master: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0
master: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3
master: Preflight Check Passed: Downloaded All Required Images
master: + echo 'Preflight Check Passed: Downloaded All Required Images'
master: + sudo kubeadm init --apiserver-advertise-address=10.0.0.10 --apiserver-cert-extra-sans=10.0.0.10 --pod-network-cidr=172.16.1.0/16 --service-cidr=172.17.1.0/18 --node-name master-node --ignore-preflight-errors Swap
master: [init] Using Kubernetes version: v1.26.1
master: [preflight] Running pre-flight checks
master: [preflight] Pulling images required for setting up a Kubernetes cluster
master: [preflight] This might take a minute or two, depending on the speed of your internet connection
master: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
master: [certs] Using certificateDir folder "/etc/kubernetes/pki"
master: [certs] Generating "ca" certificate and key
master: [certs] Generating "apiserver" certificate and key
master: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [172.17.0.1 10.0.0.10]
master: [certs] Generating "apiserver-kubelet-client" certificate and key
master: [certs] Generating "front-proxy-ca" certificate and key
master: [certs] Generating "front-proxy-client" certificate and key
master: [certs] Generating "etcd/ca" certificate and key
master: [certs] Generating "etcd/server" certificate and key
master: [certs] etcd/server serving cert is signed for DNS names [localhost master-node] and IPs [10.0.0.10 127.0.0.1 ::1]
master: [certs] Generating "etcd/peer" certificate and key
master: [certs] etcd/peer serving cert is signed for DNS names [localhost master-node] and IPs [10.0.0.10 127.0.0.1 ::1]
master: [certs] Generating "etcd/healthcheck-client" certificate and key
master: [certs] Generating "apiserver-etcd-client" certificate and key
master: [certs] Generating "sa" key and public key
master: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
master: [kubeconfig] Writing "admin.conf" kubeconfig file
master: [kubeconfig] Writing "kubelet.conf" kubeconfig file
master: [kubeconfig] Writing "controller-manager.conf" kubeconfig file
master: [kubeconfig] Writing "scheduler.conf" kubeconfig file
master: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
master: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
master: [kubelet-start] Starting the kubelet
master: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
master: [control-plane] Creating static Pod manifest for "kube-apiserver"
master: [control-plane] Creating static Pod manifest for "kube-controller-manager"
master: [control-plane] Creating static Pod manifest for "kube-scheduler"
master: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
master: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
master: [apiclient] All control plane components are healthy after 14.506990 seconds
master: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
master: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
master: [upload-certs] Skipping phase. Please see --upload-certs
master: [mark-control-plane] Marking the node master-node as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
master: [mark-control-plane] Marking the node master-node as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
master: [bootstrap-token] Using token: w0bti5.n2z0wgyx7o83al3y
master: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
master: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
master: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
master: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
master: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
master: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
master: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
master: [addons] Applied essential addon: CoreDNS
master: [addons] Applied essential addon: kube-proxy
master:
master: Your Kubernetes control-plane has initialized successfully!
master:
master: To start using your cluster, you need to run the following as a regular user:
master:
master: mkdir -p $HOME/.kube
master: sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
master: sudo chown $(id -u):$(id -g) $HOME/.kube/config
master:
master: Alternatively, if you are the root user, you can run:
master:
master: export KUBECONFIG=/etc/kubernetes/admin.conf
master:
master: You should now deploy a pod network to the cluster.
master: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
master: https://kubernetes.io/docs/concepts/cluster-administration/addons/
master:
master: Then you can join any number of worker nodes by running the following on each as root:
master:
master: kubeadm join 10.0.0.10:6443 --token w0bti5.n2z0wgyx7o83al3y
master: --discovery-token-ca-cert-hash sha256:c62cff80fa0f05347601caf3d22716237f1bc0a555890ae3248e72144ed23214
master: + mkdir -p /root/.kube
master: + sudo cp -i /etc/kubernetes/admin.conf /root/.kube/config
master: ++ id -u
master: ++ id -g
master: + sudo chown 0:0 /root/.kube/config
master: + config_path=/vagrant/configs
master: + '[' -d /vagrant/configs ']'
master: + rm -f /vagrant/configs/config /vagrant/configs/join.sh
master: + cp -i /etc/kubernetes/admin.conf /vagrant/configs/config
master: + touch /vagrant/configs/join.sh
master: + chmod +x /vagrant/configs/join.sh
master: + kubeadm token create --print-join-command
master: + curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
master: % Total % Received % Xferd Average Speed Time Time Time Current
master: Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:01:02 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:02:06 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:02:09 --:--:-- 0
master: curl: (28) Failed to connect to raw.githubusercontent.com port 443 after 129691 ms: Connection timed out
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
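If the timeout was transient (for example, a brief outage reaching raw.githubusercontent.com), re-running the failed provisioner once connectivity is back may be enough; a sketch, assuming the machine is named master as in the log above:

# Re-run only the provisioners on the existing master VM.
vagrant provision master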

Problem with installing while vagrant up command

master: Err:1 http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.25/xUbuntu_20.04  cri-o 1.25.2~0
master:   Error reading from server. Remote end closed connection [IP: 197.155.77.1 80]
master: Fetched 3,717 kB in 15min 35s (3,973 B/s)
master: E: Failed to fetch http://opensuse.mirror.liquidtelecom.com//repositories/devel%3A/kubic%3A/libcontainers%3A/stable%3A/cri-o%3A/1.25/xUbuntu_20.04/amd64/cri-o_1.25.2~0_amd64.deb  Error reading from server. Remote end closed connection [IP: 197.155.77.1 80]
master: E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

Cluster Breaks on every restart

Issue: Cluster breaks on every restart of Vagrant

RCA: swap is not getting disabled on restarts.

Solution: Disable swap on every reboot using crontab entry
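A minimal sketch of such a crontab entry, assuming root's crontab on each node and swapoff at /sbin/swapoff (paths may differ):

# Append an @reboot job to root's crontab so swap is turned off after every boot.
sudo crontab -l 2>/dev/null | { cat; echo "@reboot /sbin/swapoff -a"; } | sudo crontab -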

Issues with kubernetes-xenial Release

Hello, first of all thank you for your contribution, which helps us a lot. I have a problem: when I run vagrant up I get this error:

master: Reading package lists...
master: E: The repository 'https://apt.kubernetes.io kubernetes-xenial Release' does not have a Release file.
The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.

Can you help me ?

Thanks in advance.

Error when installing Kubernetes

At this step:

- name: Add an apt signing key for Kubernetes
  apt_key:
    url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
    state: present

- name: Adding apt repository for Kubernetes
  apt_repository:
    repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
    state: present
    filename: kubernetes.list

- name: Install Kubernetes binaries
  apt:
    name: "{{ packages }}"
    state: present
    update_cache: yes
  vars:
    packages:
      - "kubelet={{ kubernetes_version }}"
      - "kubeadm={{ kubernetes_version }}"
      - "kubectl={{ kubernetes_version }}"

Return error: fatal: [k8s-master]: FAILED! => {"changed": false, "msg": "Failed to update apt cache: E:The repository 'https://apt.kubernetes.io kubernetes-xenial Release' does not have a Release file."}


vagrant up not working due to repository signing issue

master: W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
master: E: The repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is not signed.
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

Networking issue in initial cluster setup with no network connectivity

After setting up the initial cluster, it is discovered that there is no network connectivity. Attempted to troubleshoot the issue by running the following commands:

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -it dnsutils -- nslookup kubernetes.default

The expected response is not observed, and the following message is received:

;; connection timed out; no servers could be reached

Upon inspecting the /etc/resolv.conf file in the Pod, it is noted that the nameserver correctly contains the address of the kube-dns service.

What could be the underlying cause of the network issue? Need to clarify whether this setup is intentional in the cluster environment.

Reference: Kubernetes DNS Debugging and Resolution

Proposal for VMware Desktop Integration in Vagrantfile for MacOS M2 Compatibility

I've been working with the Vagrantfile provided, and I must say, it's exceptionally well-structured and easy to follow; great job on this!

I wanted to discuss a small adjustment that could enhance our workflow, particularly for MacOS users. As you might be aware, VirtualBox currently faces some compatibility issues with MacOS, especially with the newer M2 chips. This has led to some challenges in maintaining a consistent development environment across our team.

To ensure smooth and efficient development for everyone, I was wondering if we could consider configuring our Vagrant environment to use VMware Desktop, particularly for MacOS M2 users. VMware Desktop is known for its robust support and compatibility with MacOS, and this change could significantly improve our development experience.

I understand this might require some additional adjustments to our current setup, and I'm more than willing to assist or lead the transition if needed. Your thoughts and guidance on this matter would be greatly appreciated.

Thank you for considering this request and for your continuous support in creating an effective development environment for all of us.

Vagrant up fails when applying dashboard

The "DASHBOARD_VERSION" determined by:
https://github.com/techiescamp/vagrant-kubeadm-kubernetes/blob/main/scripts/dashboard.sh#L9

is including a return "\r" character and it blocks the installation scripts.

    node01: + sudo -i -u vagrant kubectl apply -f $'https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0\r/aio/deploy/recommended.yaml'
    node01: error: the URL passed to filename "https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0\r/aio/deploy/recommended.yaml" is not valid: parse "https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0\r/aio/deploy/recommended.yaml": net/url: invalid control character in URL
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

metrics-server pod fails readiness probe

Following the instructions, after running vagrant up I checked all pods in the kube-system namespace and found that the metrics-server pod is not in a ready state. This prevents kubectl top ... commands from running successfully.

calico-kube-controllers-7c845d499-r8t4l   1/1     Running   0          5m10s
calico-node-fx4k7                         1/1     Running   0          2m22s
calico-node-r8n9j                         1/1     Running   0          5m10s
coredns-64897985d-wlnx6                   1/1     Running   0          5m10s
coredns-64897985d-zt999                   1/1     Running   0          5m10s
etcd-master-node                          1/1     Running   0          5m19s
kube-apiserver-master-node                1/1     Running   0          5m19s
kube-controller-manager-master-node       1/1     Running   0          5m26s
kube-proxy-4nxf9                          1/1     Running   0          2m22s
kube-proxy-vzhl9                          1/1     Running   0          5m10s
kube-scheduler-master-node                1/1     Running   0          5m19s
metrics-server-99c6c96cf-8jdpm            0/1     Running   0          5m10s

I could not find a way to fix this by following other related posts, which mainly focus on using --kubelet-insecure-tls; that flag is already present. Not sure why it is stuck in this state.
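For anyone debugging a similar state, a few generic checks (the label selector is the one used by the upstream metrics-server manifest and is an assumption here):

# Inspect the readiness probe failure and the container's own logs.
kubectl -n kube-system describe pod -l k8s-app=metrics-server
kubectl -n kube-system logs deployment/metrics-server
# Check whether the APIService registered by metrics-server reports Available.
kubectl get apiservice v1beta1.metrics.k8s.io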

Update Ubuntu Version

The current Vagrant setup is based on Ubuntu 21.10, which is now deprecated and has reached end of life, with no further support from the Ubuntu team.
As a result, when I tried to run the Vagrant setup with vagrant up, I got several errors from apt-get update -y saying that the repositories no longer have Release files:
E: The repository 'http://us.archive.ubuntu.com/ubuntu impish Release' no longer has a Release file.
The best solution would be to move to 20.04, which has support until 2030.

Fail trying to reach dashboard

When I run kubectl proxy and access http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login, the error below occurs.

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "error trying to reach service: dial tcp 10.85.0.2:8443: connect: connection refused",
  "reason": "ServiceUnavailable",
  "code": 503
}

Is there any other solution?

VBoxManage.exe hostonlyif create error

Vagrant failed in Win11

$ vagrant up --provider=virtualbox
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'node01' up with 'virtualbox' provider...
==> master: Checking if box 'bento/ubuntu-22.04' version '202206.13.0' is up to date...
==> master: Clearing any previously set network interfaces...
There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["hostonlyif", "create"]

Stderr: 0%...
Progress state: E_INVALIDARG
VBoxManage.exe: error: Failed to create the host-only adapter
VBoxManage.exe: error: Assertion failed: [!aInterfaceName.isEmpty()] at 'F:\tinderbox\win-6.1\src\VBox\Main\src-server\HostNetworkInterfaceImpl.cpp' (76) in long __cdecl HostNetworkInterface::init(class com::Utf8Str,class com::Utf8Str,class com::Guid,enum __MIDL___MIDL_itf_VirtualBox_0000_0000_0046).
VBoxManage.exe: error: Please contact the product vendor!
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage.exe: error: Context: "enum RTEXITCODE __cdecl handleCreate(struct HandlerArg *)" at line 95 of file VBoxManageHostonly.cpp

HTTPS error when pulling images from registry.

  • Verifying image exists in registry with command: curl $repo/v2/_catalog results in output:
    {"repositories":["simpleapp"]}
  • Output from kubectl describe pod POD command below.

![image](https://user-images.githubusercontent.com/36643484/156789088-9e4ee525-e4de-4f0a-b077-82d279451143.png)

[Feature Request] Kubernetes cluster with Containerd runtime

Hi,

There are some features that are available in containerd but not in CRI-O, and the other way around.
Can we add a containerd option to this as well?

Maybe by adding a flag/key in settings.yaml, so that the key determines whether the cluster is spun up with containerd or CRI-O.

I will be happy to help with this.

Error when running vagrant up

Environment:
Windows 10 Pro
Oracle VirtualBox - Version 7.0.2 r154219 (Qt5.15.2)
vagrant version
Installed Version: 2.3.2
Latest Version: 2.3.4

After running vagrant up I get this error:

master: No VM guests are running outdated hypervisor (qemu) binaries on this host.
master: + sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
master: curl: (22) The requested URL returned error: 500

The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

I can see the errors is happening here https://github.com/techiescamp/vagrant-kubeadm-kubernetes/blob/main/scripts/common.sh#L69
I tried many different combinations but keep on getting an error.

I tried this https://stackoverflow.com/questions/49582490/gpg-error-http-packages-cloud-google-com-apt-expkeysig-3746c208a7317b0f
