kubealex / libvirt-k8s-provisioner

Automate your k8s installation

Topics: k8s, k8s-cluster, kubernetes, kubernetes-setup, kubectl, kubeadm, rancher, nginx, calico, flannel

libvirt-k8s-provisioner's Introduction

License: MIT

libvirt-k8s-provisioner - Automate your cluster provisioning from 0 to k8s!

Welcome to the home of the project!

With this project, you can build a fully working k8s cluster (single master or HA) in minutes, with as many worker nodes as you want.

DISCLAIMER

This is a hobby project, so it's not supported for production usage, but feel free to open issues and/or contribute to it!

How does it work?

The Kubernetes version to install can be chosen from:

  • 1.30 - Latest 1.30 release (1.30.0)
  • 1.29 - Latest 1.29 release (1.29.4)
  • 1.28 - Latest 1.28 release (1.28.9)
  • 1.27 - Latest 1.27 release (1.27.14)

Terraform takes care of provisioning:

  • Loadbalancer machine with haproxy installed and configured for HA clusters
  • k8s Master(s) VM(s)
  • k8s Worker(s) VM(s)

It also takes care of preparing the host machine with the needed packages and configuration.

You can customize the setup by choosing:

  • container runtime that you want to use (cri-o or containerd).
  • schedulable master, if you want to schedule pods on your master nodes or leave the taint in place.
  • service CIDR to be used during installation.
  • pod CIDR to be used during installation.
  • network plugin to be used (Project Calico, Flannel, or Project Cilium), set up based on the respective documentation.
  • additional SANs to be added to the api-server certificate.
  • nginx-ingress-controller, haproxy-ingress-controller, or Project Contour, if you want to enable ingress management.
  • MetalLB, to manage bare-metal LoadBalancer services - WIP - only the L2 configuration can be set up via the playbook.
  • Rook-Ceph, to manage persistent storage, also configurable with a single storage node.

All VMs are prepared identically, with a dedicated user that is also able to log in via SSH.

Quickstart

The playbook is meant to be run against a local host, or a remote host that has access to the subnets that will be created, defined under the vm_host group, depending on how many clusters you want to configure at once.

First of all, you need to install the required collections to get started:

ansible-galaxy collection install -r requirements.yml
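For reference, the playbooks rely on modules such as community.general.terraform and kubernetes.core.k8s (both appear in the task logs further down this page), so requirements.yml presumably lists at least those collections. A minimal sketch of such a file (the actual file in the repo is authoritative):

    collections:
      - name: community.general
      - name: kubernetes.core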

Once the collections are installed, you can simply run the playbook:

ansible-playbook main.yml

You can quickly make it work by configuring the needed vars, or go straight with the defaults!

You can also install your cluster using the Makefile with:

To install collections:

make setup

To install the cluster:

make create

Quickstart with Execution Environment

The playbooks are compatible with the newly introduced Execution Environments (EE). To use them with an execution environment, you need ansible-builder and ansible-navigator installed.

Build EE image

To build the EE image, run the build from the repository root:

ansible-builder build -f execution-environment/execution-environment.yml -t k8s-ee

Run playbooks

To run the playbooks, use ansible-navigator:

ansible-navigator run main.yml -m stdout
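If ansible-navigator is not already configured to pick up the image (for example via an ansible-navigator.yml settings file in the repo), you can select the image built above explicitly with ansible-navigator's standard --eei flag:

ansible-navigator run main.yml -m stdout --eei k8s-ee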

Recommended sizing

The recommended sizing is:

Role vCPU RAM
master 2 2G
worker 2 2G

vars/k8s_cluster.yml

    # General configuration

    k8s:
      cluster_name: k8s-test
      cluster_os: Ubuntu
      cluster_version: 1.24
      container_runtime: crio
      master_schedulable: false

      # Nodes configuration

      control_plane:
        vcpu: 2
        mem: 2
        vms: 3
        disk: 30

      worker_nodes:
        vcpu: 2
        mem: 2
        vms: 1
        disk: 30

      # Network configuration

      network:
        network_cidr: 192.168.200.0/24
        domain: k8s.test
        additional_san: ""
        pod_cidr: 10.20.0.0/16
        service_cidr: 10.110.0.0/16
        cni_plugin: cilium

    rook_ceph:
      install_rook: false
      volume_size: 50
      rook_cluster_size: 1

    # Ingress controller configuration [nginx/haproxy]

    ingress_controller:
      install_ingress_controller: true
      type: haproxy
      node_port:
        http: 31080
        https: 31443

    # Section for metalLB setup

    metallb:
      install_metallb: false
      l2:
        iprange: 192.168.200.210-192.168.200.250

Sizes for disk and mem are in GB. disk provisions space in the cloud image for pods' ephemeral storage.

cluster_version can be set to any of the supported releases listed above to install the corresponding latest patch version for that release.

VMS are created with these names by default (customizing them is work in progress):

- **cluster_name**-loadbalancer.**domain**
- **cluster_name**-master-N.**domain**
- **cluster_name**-worker-N.**domain**

It is possible to choose CentOS or Ubuntu as the OS for the Kubernetes hosts.

Multiple clusters - Thanks to @3rd-st-ninja for the input

Since the last release, it is possible to provision multiple clusters on the same host. Each cluster is self-consistent and has its own folder under the clusters folder in the playbook root (e.g. /home/user/k8ssetup/clusters).

    clusters
    └── k8s-provisioner
    	├── admin.kubeconfig
    	├── haproxy.cfg
    	├── id_rsa
    	├── id_rsa.pub
    	├── libvirt-resources
    	│   ├── libvirt-resources.tf
    	│   └── terraform.tfstate
    	├── loadbalancer
    	│   ├── cloud_init.cfg
    	│   ├── k8s-loadbalancer.tf
    	│   └── terraform.tfstate
    	├── masters
    	│   ├── cloud_init.cfg
    	│   ├── k8s-master.tf
    	│   └── terraform.tfstate
    	├── workers
    	│   ├── cloud_init.cfg
    	│   ├── k8s-workers.tf
    	│   └── terraform.tfstate
    	└── workers-rook
    	    ├── cloud_init.cfg
    	    └── k8s-workers.tf

A custom cleanup playbook is provided in the main folder for removing a single cluster without touching the others:

k8s-provisioner-cleanup-playbook.yml

A separate inventory is also provided for each cluster:

k8s-provisioner-inventory-k8s

In order to keep clusters separated, ensure that you use different k8s.cluster_name, k8s.network.domain, and k8s.network.network_cidr values.
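For illustration, a second cluster's vars could differ in just those keys (the values below are made up for the example):

    k8s:
      cluster_name: k8s-second
      network:
        domain: second.k8s.test
        network_cidr: 192.168.201.0/24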

Rook

The Rook setup creates a dedicated kind of worker, with an additional volume on the VMs that require it. It is now possible to select the size of the Rook cluster using the rook_ceph.rook_cluster_size variable in the settings.

MetalLB

The basic setup is taken from the documentation. At the moment, the l2 parameter lists the IPs (defaulting to some IPs in the same subnet as the hosts) that can be used as 'external' IPs for accessing applications.
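For reference, an L2 setup driven by the metallb.l2.iprange variable above maps to something like the following MetalLB resources (shown using MetalLB's current CRD-based configuration; older releases used a ConfigMap instead, and the manifests actually rendered by the playbook may differ):

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.200.210-192.168.200.250
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - default-pool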

Suggestions and improvements are highly welcome! Alex

libvirt-k8s-provisioner's People

Contributors

kubealex, larsjohnsen, sys-ops, teoscol85


libvirt-k8s-provisioner's Issues

Any idea why this is happening sometimes?

TASK [Join control-plane nodes in cluster] ******************************************************************************************************************************************************************************************
fatal: [k8s-venom-master-1.k8s.lab]: FAILED! => {"changed": true, "cmd": ["kubeadm", "join", "--config", "/tmp/kubeadm-join.yaml"], "delta": "0:00:53.646320", "end": "2023-09-25 12:24:52.184083", "msg": "non-zero return code", "rc": 1, "start": "2023-09-25 12:23:58.537763", "stderr": "W0925 12:24:18.215730    2942 checks.go:835] detected that the sandbox image \"registry.k8s.io/pause:3.6\" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using \"registry.k8s.io/pause:3.9\" as the CRI sandbox image.\nerror execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Get \"https://k8s-venom-loadbalancer.k8s.lab:6443/api/v1/namespaces/kube-system/secrets/kubeadm-certs?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["W0925 12:24:18.215730    2942 checks.go:835] detected that the sandbox image \"registry.k8s.io/pause:3.6\" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using \"registry.k8s.io/pause:3.9\" as the CRI sandbox image.", "error execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Get \"https://k8s-venom-loadbalancer.k8s.lab:6443/api/v1/namespaces/kube-system/secrets/kubeadm-certs?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "[preflight] Running pre-flight checks\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n[preflight] Running pre-flight checks before initializing the new control plane instance\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[download-certs] Downloading the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace", "stdout_lines": ["[preflight] Running pre-flight checks", "[preflight] Reading configuration from the cluster...", "[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'", "[preflight] Running pre-flight checks before initializing the new control plane instance", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "[download-certs] Downloading the certificates in Secret \"kubeadm-certs\" in the \"kube-system\" Namespace"]}

I think the interesting part is this: 
`detected that the sandbox image \"registry.k8s.io/pause:3.6\" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using \"registry.k8s.io/pause:3.9\" as the CRI sandbox image.", `
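For reference, warnings like this are typically resolved by aligning the runtime's sandbox (pause) image with the one kubeadm recommends. A hedged sketch of an Ansible task for containerd hosts follows; the config path and key are containerd defaults, not taken from this repo:

    - name: Align containerd sandbox image with kubeadm's recommendation (illustrative)
      ansible.builtin.lineinfile:
        path: /etc/containerd/config.toml
        regexp: '^\s*sandbox_image\s*='
        line: '    sandbox_image = "registry.k8s.io/pause:3.9"'
      # containerd would need a restart for the change to take effect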

[Ubuntu 20.04] Setup fails when provisioning VMs due to DNS error

My setup: a vanilla base server with just the needed packages installed

vars/k8s_cluster.yml changes: container runtime set to containerd and Kubernetes version 1.21

Error

PLAY [Check connection and set facts] **********************************************

TASK [Wait 600 seconds for target connection to become reachable/usable] ***********
fatal: [k8s-test-master-0.k8s.test]: FAILED! => {"changed": false, "elapsed": 600, "msg": "timed out waiting for ping module test: Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-test-master-0.k8s.test: Name or service not known"}
fatal: [k8s-test-master-2.k8s.test]: FAILED! => {"changed": false, "elapsed": 600, "msg": "timed out waiting for ping module test: Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-test-master-2.k8s.test: Name or service not known"}
fatal: [k8s-test-worker-0.k8s.test]: FAILED! => {"changed": false, "elapsed": 600, "msg": "timed out waiting for ping module test: Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-test-worker-0.k8s.test: Name or service not known"}
fatal: [k8s-test-master-1.k8s.test]: FAILED! => {"changed": false, "elapsed": 600, "msg": "timed out waiting for ping module test: Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-test-master-1.k8s.test: Name or service not known"}

PLAY RECAP *************************************************************************
k8s-test-master-0.k8s.test : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
k8s-test-master-1.k8s.test : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
k8s-test-master-2.k8s.test : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
k8s-test-worker-0.k8s.test : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
localhost                  : ok=36   changed=24   unreachable=0    failed=0    skipped=16   rescued=0    ignored=0

make: *** [Makefile:13: create] Error 2

DNS resolution from the host is in fact broken; DNS resolution from inside the VMs works.

More Infos

virsh list --all
 Id   Name                State
-----------------------------------
 1    k8s-test-master-2   running
 2    k8s-test-master-1   running
 3    k8s-test-master-0   running
 4    k8s-test-worker-0   running

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:b6:34:b9    ipv4         192.168.200.143/24

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet2      52:54:00:f6:86:8c    ipv4         192.168.200.152/24

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet1      52:54:00:b2:12:c5    ipv4         192.168.200.65/24

 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet3      52:54:00:1e:93:48    ipv4         192.168.200.81/24
dig k8s-test-master-0.k8s.test

; <<>> DiG 9.16.1-Ubuntu <<>> k8s-test-master-0.k8s.test
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 40048
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;k8s-test-master-0.k8s.test.	IN	A

;; AUTHORITY SECTION:
.			42	IN	SOA	a.root-servers.net. nstld.verisign-grs.com. 2021101200 1800 900 604800 86400

;; Query time: 3 msec
;; SERVER: 147.75.207.207#53(147.75.207.207)
;; WHEN: Tue Oct 12 09:09:57 UTC 2021
;; MSG SIZE  rcvd: 130

resolvectl query k8s-test-master-0.k8s.test
k8s-test-master-0.k8s.test: 192.168.200.65     -- link: k8s-test

-- Information acquired via protocol DNS in 1.0ms.
-- Data is authenticated: no

Provisioner creates first master vm in a powered off state and promptly fails

Hello again. I'm running into the below issue when trying to deploy a 3-node test cluster. Based on some searching, it seems it could be an issue with the libvirt Terraform provider, but I'm really not sure. Hoping I can get pointed in the right direction again.

The VM test-master-0 gets created in a powered-off state, and the play throws the following error:

TASK [community.general.terraform] ************************************************************************************************************************************************************************************************************************************
task path: /...libvirt-k8s-provisioner/04_provisioning_vms.yml:26

...

fatal: [r610a]: FAILED! => {
    "changed": false,
    "cmd": "/usr/bin/terraform apply -no-color -input=false -auto-approve -lock=true /tmp/tmpFLko37.tfplan",
    "invocation": {
        "module_args": {
            "backend_config": null,
            "backend_config_files": null,
            "binary_path": null,
            "check_destroy": false,
            "force_init": true,
            "init_reconfigure": false,
            "lock": true,
            "lock_timeout": null,
            "overwrite_init": true,
            "plan_file": null,
            "plugin_paths": null,
            "project_path": "/data/admin/vm/terraform/files/masters",
            "purge_workspace": false,
            "state": "present",
            "state_file": null,
            "targets": [],
            "variables": {
                "cpu": "2",
                "disk_size": "30",
                "domain": "blah.net",
                "hostname": "test-master",
                "libvirt_network": "test",
                "libvirt_pool": "test",
                "memory": "2",
                "os": "ubuntu",
                "os_image_name": "OS-GenericCloud.qcow2",
                "vm_count": "1"
            },
            "variables_files": null,
            "workspace": "default"
        }
    },
    "msg": "\nError: Error defining libvirt domain: operation failed: domain 'test-master-0' already exists with uuid 4c3bc5b8-cbb4-49af-90dc-ac45e200070a\n\n  with libvirt_domain.k8s-master[0],\n  on k8s-master.tf line 51, in resource \"libvirt_domain\" \"k8s-master\":\n  51: resource \"libvirt_domain\" \"k8s-master\" {",
    "rc": 1,
    "stderr": "\nError: Error defining libvirt domain: operation failed: domain 'test-master-0' already exists with uuid 4c3bc5b8-cbb4-49af-90dc-ac45e200070a\n\n  with libvirt_domain.k8s-master[0],\n  on k8s-master.tf line 51, in resource \"libvirt_domain\" \"k8s-master\":\n  51: resource \"libvirt_domain\" \"k8s-master\" {\n\n",
    "stderr_lines": [
        "",
        "Error: Error defining libvirt domain: operation failed: domain 'test-master-0' already exists with uuid 4c3bc5b8-cbb4-49af-90dc-ac45e200070a",
        "",
        "  with libvirt_domain.k8s-master[0],",
        "  on k8s-master.tf line 51, in resource \"libvirt_domain\" \"k8s-master\":",
        "  51: resource \"libvirt_domain\" \"k8s-master\" {",
        ""
    ],
    "stdout": "libvirt_domain.k8s-master[0]: Creating...\n",
    "stdout_lines": [
        "libvirt_domain.k8s-master[0]: Creating..."
    ]
}

virsh list --all

# virsh list --all
 Id   Name            State
--------------------------------
 1    git             running
 -    test-master-0   shut off

virsh dominfo test-master-0

# virsh dominfo test-master-0
Id:             -
Name:           test-master-0
UUID:           4c3bc5b8-cbb4-49af-90dc-ac45e200070a
OS Type:        hvm
State:          shut off
CPU(s):         2
Max memory:     2097152 KiB
Used memory:    2097152 KiB
Persistent:     yes
Autostart:      enable
Managed save:   no
Security model: none
Security DOI:   0

The line in the terraform code it points to is:

resource "libvirt_domain" "k8s-master" {
  autostart = true
  count     = var.vm_count
  name      = "${var.hostname}-${count.index}"
  memory    = var.memory * 1024
  vcpu      = var.cpu

  disk {
    volume_id = libvirt_volume.os_image_resized[count.index].id
  }

  network_interface {
    network_name = var.libvirt_network
  }

  cloudinit = libvirt_cloudinit_disk.commoninit[count.index].id

  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = "true"
  }
}

libvirt issue

"msg": "\nError: error creating libvirt domain: operation failed: unable to find any master var store for loader: /usr/share/edk2/ovmf/OVMF_CODE.fd\n\n with module.master_nodes.libvirt_domain.service-vm[0],\n on .terraform/modules/master_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource "libvirt_domain" "service-vm":\n 47: resource "libvirt_domain" "service-vm" {\n\n\nError: error creating libvirt domain: operation failed: unable to find any master var store for loader: /usr/share/edk2/ovmf/OVMF_CODE.fd\n\n with module.worker_nodes.libvirt_domain.service-vm[0],\n on .terraform/modules/worker_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource "libvirt_domain" "service-vm":\n 47: resource "libvirt_domain" "service-vm" {",
"rc": 1,
"stderr": "\nError: error creating libvirt domain: operation failed: unable to find any master var store for loader: /usr/share/edk2/ovmf/OVMF_CODE.fd\n\n with module.master_nodes.libvirt_domain.service-vm[0],\n on .terraform/modules/master_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource "libvirt_domain" "service-vm":\n 47: resource "libvirt_domain" "service-vm" {\n\n\nError: error creating libvirt domain: operation failed: unable to find any master var store for loader: /usr/share/edk2/ovmf/OVMF_CODE.fd\n\n with module.worker_nodes.libvirt_domain.service-vm[0],\n on .terraform/modules/worker_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource "libvirt_domain" "service-vm":\n 47: resource "libvirt_domain" "service-vm" {\n\n",
"stderr_lines": [
"",
"Error: error creating libvirt domain: operation failed: unable to find any master var store for loader: /usr/share/edk2/ovmf/OVMF_CODE.fd",
"",
" with module.master_nodes.libvirt_domain.service-vm[0],",
" on .terraform/modules/master_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource "libvirt_domain" "service-vm":",
" 47: resource "libvirt_domain" "service-vm" {",
"",
"",
"Error: error creating libvirt domain: operation failed: unable to find any master var store for loader: /usr/share/edk2/ovmf/OVMF_CODE.fd",
"",
" with module.worker_nodes.libvirt_domain.service-vm[0],",
" on .terraform/modules/worker_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource "libvirt_domain" "service-vm":",
" 47: resource "libvirt_domain" "service-vm" {",
""
],
"stdout": "module.libvirt_pool.libvirt_pool.vm_pool: Creating...\nmodule.libvirt_network.libvirt_network.vm_network: Creating...\nmodule.libvirt_pool.libvirt_pool.vm_pool: Creation complete after 5s [id=71027022-1700-4007-9dc4-0d2d17c23e2e]\nmodule.libvirt_network.libvirt_network.vm_network: Creation complete after 5s [id=a1f12fab-2aca-4b4b-8fc2-861ec807a64c]\nmodule.master_nodes.data.template_file.user_data[0]: Reading...\nmodule.worker_nodes.data.template_file.user_data[0]: Reading...\nmodule.master_nodes.libvirt_volume.os_image[0]: Creating...\nmodule.worker_nodes.libvirt_volume.os_image[0]: Creating...\nmodule.worker_nodes.data.template_file.user_data[0]: Read complete after 0s [id=34f0436f48849fa59eb76b98d30720376f110444df95237e3adcc96c7f0d8d52]\nmodule.master_nodes.data.template_file.user_data[0]: Read complete after 0s [id=d12f168eb9e443c541d4177bb394b0294ce1dca493cc2131735408e6ccfa06d2]\nmodule.worker_nodes.libvirt_cloudinit_disk.commoninit[0]: Creating...\nmodule.master_nodes.libvirt_cloudinit_disk.commoninit[0]: Creating...\nmodule.master_nodes.libvirt_volume.os_image[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0-os_image]\nmodule.master_nodes.libvirt_volume.os_disk[0]: Creating...\nmodule.worker_nodes.libvirt_volume.os_image[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0-os_image]\nmodule.worker_nodes.libvirt_volume.os_disk[0]: Creating...\nmodule.worker_nodes.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0-commoninit.iso;77011195-611e-4f46-bc26-54ade61e309a]\nmodule.master_nodes.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0-commoninit.iso;2de59958-e79e-41bd-acac-5899cf09e398]\nmodule.master_nodes.libvirt_volume.os_disk[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0]\nmodule.master_nodes.libvirt_domain.service-vm[0]: Creating...\nmodule.worker_nodes.libvirt_volume.os_disk[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0]\nmodule.worker_nodes.libvirt_domain.service-vm[0]: Creating...\n",
"stdout_lines": [
"module.libvirt_pool.libvirt_pool.vm_pool: Creating...",
"module.libvirt_network.libvirt_network.vm_network: Creating...",
"module.libvirt_pool.libvirt_pool.vm_pool: Creation complete after 5s [id=71027022-1700-4007-9dc4-0d2d17c23e2e]",
"module.libvirt_network.libvirt_network.vm_network: Creation complete after 5s [id=a1f12fab-2aca-4b4b-8fc2-861ec807a64c]",
"module.master_nodes.data.template_file.user_data[0]: Reading...",
"module.worker_nodes.data.template_file.user_data[0]: Reading...",
"module.master_nodes.libvirt_volume.os_image[0]: Creating...",
"module.worker_nodes.libvirt_volume.os_image[0]: Creating...",
"module.worker_nodes.data.template_file.user_data[0]: Read complete after 0s [id=34f0436f48849fa59eb76b98d30720376f110444df95237e3adcc96c7f0d8d52]",
"module.master_nodes.data.template_file.user_data[0]: Read complete after 0s [id=d12f168eb9e443c541d4177bb394b0294ce1dca493cc2131735408e6ccfa06d2]",
"module.worker_nodes.libvirt_cloudinit_disk.commoninit[0]: Creating...",
"module.master_nodes.libvirt_cloudinit_disk.commoninit[0]: Creating...",
"module.master_nodes.libvirt_volume.os_image[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0-os_image]",
"module.master_nodes.libvirt_volume.os_disk[0]: Creating...",
"module.worker_nodes.libvirt_volume.os_image[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0-os_image]",
"module.worker_nodes.libvirt_volume.os_disk[0]: Creating...",
"module.worker_nodes.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0-commoninit.iso;77011195-611e-4f46-bc26-54ade61e309a]",
"module.master_nodes.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0-commoninit.iso;2de59958-e79e-41bd-acac-5899cf09e398]",
"module.master_nodes.libvirt_volume.os_disk[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0]",
"module.master_nodes.libvirt_domain.service-vm[0]: Creating...",
"module.worker_nodes.libvirt_volume.os_disk[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0]",
"module.worker_nodes.libvirt_domain.service-vm[0]: Creating..."
]
}

missing prereq ssh key

TASK [terraform] *****************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Terraform plan could not be created\r\nSTDOUT: Refreshing Terraform state in-memory prior to plan...\nThe refreshed state will be used to calculate this plan, but will not be\npersisted to local or remote state storage.\n\ndata.template_file.meta_data[0]: Refreshing state...\ndata.template_file.meta_data[1]: Refreshing state...\ndata.template_file.user_data[1]: Refreshing state...\ndata.template_file.user_data[0]: Refreshing state...\n\r\n\r\nSTDERR: \nError: failed to render : <template_file>:13,11-16: Error in function call; Call to function \"file\" failed: no file exists at ../../../id_rsa.pub.\n\n  on k8s-master.tf line 45, in data \"template_file\" \"user_data\":\n  45: data \"template_file\" \"user_data\" {\n\n\n\nError: failed to render : <template_file>:13,11-16: Error in function call; Call to function \"file\" failed: no file exists at ../../../id_rsa.pub.\n\n  on k8s-master.tf line 45, in data \"template_file\" \"user_data\":\n  45: data \"template_file\" \"user_data\" {\n\n\n"}

Missing dependency `xsltproc` for Ubuntu Cloud 23.10

Hi,

Using the current cloud release of Ubuntu 23.10, task "Ensure control plane VMs are in place" fails due to missing dependency xsltproc. Can be fixed manually by running sudo apt install xsltproc.
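A hedged sketch of a preflight task that would guarantee the dependency on Debian/Ubuntu hosts (the task is illustrative, not part of the repo):

    - name: Ensure xsltproc is installed (illustrative preflight task)
      become: true
      ansible.builtin.apt:
        name: xsltproc
        state: present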

PLAY [This play provisions k8s VMs based on intial config] ****************************************************************************************************************************************************************************************************
                                                               
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************
ok: [localhost]                                                                                                                                                                                                                                                
                                                               
TASK [Enumerate Nodes] ****************************************************************************************************************************************************************************************************************************************
ok: [localhost]                                                                                                                                                                                                                                                
                                                               
TASK [Ensure control plane VMs are in place] ******************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/bin/terraform apply -no-color -input=false -auto-approve -lock=true /tmp/tmpjbeg8ld8.tfplan", "msg": "\nError: error applying XSLT stylesheet: exec: \"xsltproc\": executable file not found in 
$PATH\n\n  with module.master_nodes.libvirt_domain.service-vm[0],\n  on .terraform/modules/master_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource \"libvirt_domain\" \"service-vm\":\n  47: resource \"libvirt_domain\" \"service-vm\" {\
n\n\nError: error applying XSLT stylesheet: exec: \"xsltproc\": executable file not found in $PATH\n\n  with module.worker_nodes.libvirt_domain.service-vm[0],\n  on .terraform/modules/worker_nodes/modules/terraform-libvirt-instance/main.tf line 47, in res
ource \"libvirt_domain\" \"service-vm\":\n  47: resource \"libvirt_domain\" \"service-vm\" {", "rc": 1, "stderr": "\nError: error applying XSLT stylesheet: exec: \"xsltproc\": executable file not found in $PATH\n\n  with module.master_nodes.libvirt_domain
.service-vm[0],\n  on .terraform/modules/master_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource \"libvirt_domain\" \"service-vm\":\n  47: resource \"libvirt_domain\" \"service-vm\" {\n\n\nError: error applying XSLT stylesheet: exec: 
\"xsltproc\": executable file not found in $PATH\n\n  with module.worker_nodes.libvirt_domain.service-vm[0],\n  on .terraform/modules/worker_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource \"libvirt_domain\" \"service-vm\":\n  47: re
source \"libvirt_domain\" \"service-vm\" {\n\n", "stderr_lines": ["", "Error: error applying XSLT stylesheet: exec: \"xsltproc\": executable file not found in $PATH", "", "  with module.master_nodes.libvirt_domain.service-vm[0],", "  on .terraform/modules
/master_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource \"libvirt_domain\" \"service-vm\":", "  47: resource \"libvirt_domain\" \"service-vm\" {", "", "", "Error: error applying XSLT stylesheet: exec: \"xsltproc\": executable file no
t found in $PATH", "", "  with module.worker_nodes.libvirt_domain.service-vm[0],", "  on .terraform/modules/worker_nodes/modules/terraform-libvirt-instance/main.tf line 47, in resource \"libvirt_domain\" \"service-vm\":", "  47: resource \"libvirt_domain\
" \"service-vm\" {", ""], "stdout": "module.libvirt_pool.libvirt_pool.vm_pool: Creating...\nmodule.libvirt_network.libvirt_network.vm_network: Creating...\nmodule.libvirt_pool.libvirt_pool.vm_pool: Creation complete after 5s [id=57d32f16-508e-4ec6-9e13-8f
877c5ee1cc]\nmodule.libvirt_network.libvirt_network.vm_network: Creation complete after 5s [id=d8c1e35f-dd18-4fd4-b727-3b581919d99f]\nmodule.worker_nodes.libvirt_volume.os_image[0]: Creating...\nmodule.master_nodes.libvirt_volume.os_image[0]: Creating...\
nmodule.worker_nodes.data.template_file.user_data[0]: Reading...\nmodule.master_nodes.data.template_file.user_data[0]: Reading...\nmodule.master_nodes.data.template_file.user_data[0]: Read complete after 0s [id=97a93ca0149ddcfb61c0e6f232a592298c0a416d063b
f9d86e138ed730977c55]\nmodule.worker_nodes.data.template_file.user_data[0]: Read complete after 0s [id=2594d6947a563f1d3c9d3b63f155ace2c7c7c2ea1f5d3f27ae6836c812e4e3a1]\nmodule.worker_nodes.libvirt_cloudinit_disk.commoninit[0]: Creating...\nmodule.master_
nodes.libvirt_cloudinit_disk.commoninit[0]: Creating...\nmodule.worker_nodes.libvirt_volume.os_image[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0-os_image]\nmodule.worker_nodes.libvirt_volume.os_disk[0]: Creating..
.\nmodule.master_nodes.libvirt_volume.os_image[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0-os_image]\nmodule.master_nodes.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 2s [id=/var/lib/libvirt/image
s/k8s-test/k8s-test-master-0-commoninit.iso;7b8d2b9e-13fe-44f6-81df-c59729eaa9e8]\nmodule.worker_nodes.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0-commoninit.iso;16460142-17aa-440
4-99e3-059ae7385e7a]\nmodule.master_nodes.libvirt_volume.os_disk[0]: Creating...\nmodule.worker_nodes.libvirt_volume.os_disk[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0]\nmodule.worker_nodes.libvirt_domain.service
-vm[0]: Creating...\nmodule.master_nodes.libvirt_volume.os_disk[0]: Creation complete after 0s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0]\nmodule.master_nodes.libvirt_domain.service-vm[0]: Creating...\n", "stdout_lines": ["module.libvirt_pool
.libvirt_pool.vm_pool: Creating...", "module.libvirt_network.libvirt_network.vm_network: Creating...", "module.libvirt_pool.libvirt_pool.vm_pool: Creation complete after 5s [id=57d32f16-508e-4ec6-9e13-8f877c5ee1cc]", "module.libvirt_network.libvirt_networ
k.vm_network: Creation complete after 5s [id=d8c1e35f-dd18-4fd4-b727-3b581919d99f]", "module.worker_nodes.libvirt_volume.os_image[0]: Creating...", "module.master_nodes.libvirt_volume.os_image[0]: Creating...", "module.worker_nodes.data.template_file.user
_data[0]: Reading...", "module.master_nodes.data.template_file.user_data[0]: Reading...", "module.master_nodes.data.template_file.user_data[0]: Read complete after 0s [id=97a93ca0149ddcfb61c0e6f232a592298c0a416d063bf9d86e138ed730977c55]", "module.worker_n
odes.data.template_file.user_data[0]: Read complete after 0s [id=2594d6947a563f1d3c9d3b63f155ace2c7c7c2ea1f5d3f27ae6836c812e4e3a1]", "module.worker_nodes.libvirt_cloudinit_disk.commoninit[0]: Creating...", "module.master_nodes.libvirt_cloudinit_disk.commo
ninit[0]: Creating...", "module.worker_nodes.libvirt_volume.os_image[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0-os_image]", "module.worker_nodes.libvirt_volume.os_disk[0]: Creating...", "module.master_nodes.libvi
rt_volume.os_image[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0-os_image]", "module.master_nodes.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master
-0-commoninit.iso;7b8d2b9e-13fe-44f6-81df-c59729eaa9e8]", "module.worker_nodes.libvirt_cloudinit_disk.commoninit[0]: Creation complete after 2s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0-commoninit.iso;16460142-17aa-4404-99e3-059ae7385e7a]", "
module.master_nodes.libvirt_volume.os_disk[0]: Creating...", "module.worker_nodes.libvirt_volume.os_disk[0]: Creation complete after 1s [id=/var/lib/libvirt/images/k8s-test/k8s-test-worker-0]", "module.worker_nodes.libvirt_domain.service-vm[0]: Creating..
.", "module.master_nodes.libvirt_volume.os_disk[0]: Creation complete after 0s [id=/var/lib/libvirt/images/k8s-test/k8s-test-master-0]", "module.master_nodes.libvirt_domain.service-vm[0]: Creating..."]}                                                     
                                                               

License

Hi @kubealex, have you considered putting an open-source license on this code base? I would propose MIT or Apache.

Loadbalancer firewall configuration does not persist through reboot

The configuration of the firewall in k8s-loadbalancer is not persisted through a reboot, I suspect this describes why:
https://access.redhat.com/discussions/1455033

Following a reboot I observe that eth0 is no longer present in the internal zone:

internal
  target: default
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client http https mdns samba-client ssh
  ports: 6443/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
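For context, firewalld only keeps changes across reboots when they are made permanent. A hedged sketch of a task that would bind the interface to the zone persistently (ansible.posix.firewalld is the standard module for this; the zone and interface names are taken from the output above):

    - name: Bind eth0 to the internal zone persistently (illustrative)
      ansible.posix.firewalld:
        zone: internal
        interface: eth0
        permanent: true
        immediate: true
        state: enabled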

Facing issue while installing requirements and playbook

Hi

I tried to follow the steps in the repo. When I ran 'ansible-galaxy collection install -r requirements.yml', it did not install anything and also did not show any error. When I ran 'ansible-playbook main.yml', it gave the following error:

ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in '/root/libvirt-k8s-provisioner/00_pre_flight_checklist.yml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

tasks:
- name: Retrieve the minor version
^ here

Following is my ansible version
root@k8s-master:~/libvirt-k8s-provisioner# ansible --version
ansible 2.5.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Jul 1 2022, 15:56:32) [GCC 7.5.0]

Can you please help me with this? Also, please provide a prerequisites doc if possible.

Thanks,
Akshay

Failing to create test-master-0-commoninit.iso

When trying to provision a small cluster on a remote server, it fails when trying to create test-master-0-commoninit.iso. It successfully creates the first two image files in that location. I've confirmed multiple times that there are no permission issues on that path (it's an NFS path). I'm not an Ansible or Terraform ninja and haven't had luck googling for information. There's a bunch of info below, but I'm happy to provide any additional info anyone may need to help me solve this.

/data/admin/vm/vm_storage/test$ ls
test-master-os_image-0  test-master-os_image_resized-0
TASK [community.general.terraform] ********************************************************************************************************************************************************************************************************************************************
    "msg": "\nError: Error creating libvirt volume for cloudinit device test-master-0-commoninit.iso: Failed to create file '/data/admin/vm/vm_storage/test/test-master-0-commoninit.iso': Operation not permitted\n\n  with libvirt_cloudinit_disk.commoninit[0],\n  on k8s-master.tf line 33, in resource \"libvirt_cloudinit_disk\" \"commoninit\":\n  33: resource \"libvirt_cloudinit_disk\" \"commoninit\" {",
    "rc": 1,
    "stderr": "\nError: Error creating libvirt volume for cloudinit device test-master-0-commoninit.iso: Failed to create file '/data/admin/vm/vm_storage/test/test-master-0-commoninit.iso': Operation not permitted\n\n  with libvirt_cloudinit_disk.commoninit[0],\n  on k8s-master.tf line 33, in resource \"libvirt_cloudinit_disk\" \"commoninit\":\n  33: resource \"libvirt_cloudinit_disk\" \"commoninit\" {\n\n",
    "stderr_lines": [
        "",
        "Error: Error creating libvirt volume for cloudinit device test-master-0-commoninit.iso: Failed to create file '/data/admin/vm/vm_storage/test/test-master-0-commoninit.iso': Operation not permitted",
        "",
        "  with libvirt_cloudinit_disk.commoninit[0],",
        "  on k8s-master.tf line 33, in resource \"libvirt_cloudinit_disk\" \"commoninit\":",
        "  33: resource \"libvirt_cloudinit_disk\" \"commoninit\" {",
        ""
    ],
    "stdout": "libvirt_cloudinit_disk.commoninit[0]: Creating...\n",
    "stdout_lines": [
        "libvirt_cloudinit_disk.commoninit[0]: Creating..."
    ]

Ansible version

$ ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]. This feature will be removed from ansible-core in version 2.12. Deprecation warnings can be
 disabled by setting deprecation_warnings=False in ansible.cfg.
ansible [core 2.11.3] 

Terraform version

$ terraform --version
Terraform v1.0.5

You forgot to update kubeadm-config.yaml.j2

I see that you have updated to the latest patch releases for all Kubernetes major versions. However, you forgot to also change those values in kubeadm-config.yaml.j2.

kubernetesVersion: v{{ k8s.cluster_version }}.{{ '12' if k8s.cluster_version is version('1.24', '==') else '8' if k8s.cluster_version is version('1.25', '==') else '3' if k8s.cluster_version is version('1.26', '==') else '0' if k8s.cluster_version is version('1.27', '==') }}
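A hedged sketch of what the updated expression could look like, using the latest patch versions listed at the top of this README (1.27.14, 1.28.9, 1.29.4, 1.30.0):

    kubernetesVersion: v{{ k8s.cluster_version }}.{{ '14' if k8s.cluster_version is version('1.27', '==') else '9' if k8s.cluster_version is version('1.28', '==') else '4' if k8s.cluster_version is version('1.29', '==') else '0' if k8s.cluster_version is version('1.30', '==') }}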

yum lockfile is held by another process

I often see the following failure when executing the yum tasks:

TASK [Install packages] **************************************************************************************************************************
fatal: [k8s-loadbalancer.k8s.lab]: FAILED! => {"changed": false, "msg": "yum lockfile is held by another process"}

This can be resolved by adding lock_timeout to the tasks.
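A hedged sketch of the suggested change (the yum module has supported lock_timeout since Ansible 2.8; the package-list variable here is illustrative):

    - name: Install packages
      ansible.builtin.yum:
        name: "{{ virtualization_packages }}"
        state: present
        lock_timeout: 180  # seconds to wait for another process to release the yum lock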

fatal: [localhost]: FAILED! => {"changed": false, "command": "/usr/local/bin/terraform apply

Caught one more.

TASK [terraform] *******************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "command": "/usr/local/bin/terraform apply -no-color -input=false -auto-approve=true -lock=true /tmp/tmpekvbdoti.tfplan", "msg": "Failure when executing Terraform command. Exited 1.\nstdout: libvirt_domain.k8s-master[1]: Creating...\nlibvirt_domain.k8s-master[0]: Creating...\nlibvirt_domain.k8s-master[2]: Creating...\n\nstderr: \nError: Error defining libvirt domain: virError(Code=9, Domain=20, Message='operation failed: domain 'k8s-test-master-1' already exists with uuid d4a59107-7719-4beb-84ef-349ff7fabf47')\n\n  on k8s-master.tf line 68, in resource \"libvirt_domain\" \"k8s-master\":\n  68: resource \"libvirt_domain\" \"k8s-master\" {\n\n\n\nError: Error defining libvirt domain: virError(Code=9, Domain=20, Message='operation failed: domain 'k8s-test-master-0' already exists with uuid 70a8ecb8-5c0a-4ea0-b151-63e650054a7a')\n\n  on k8s-master.tf line 68, in resource \"libvirt_domain\" \"k8s-master\":\n  68: resource \"libvirt_domain\" \"k8s-master\" {\n\n\n\nError: Error defining libvirt domain: virError(Code=9, Domain=20, Message='operation failed: domain 'k8s-test-master-2' already exists with uuid fcbcf5dc-dc02-4714-aed1-e9b796a7c538')\n\n  on k8s-master.tf line 68, in resource \"libvirt_domain\" \"k8s-master\":\n  68: resource \"libvirt_domain\" \"k8s-master\" {\n\n\n"}

Feature request: Support deploying multiple clusters to the same host

Requesting support to deploy multiple clusters to the same KVM host. My thoughts on how this could be accomplished:

  1. Terraform state files are generated in the files/terraform/* directories for each playbook. When deploying a second cluster, terraform wants to destroy the existing cluster. A way around this - assume a cluster name is test1. Ansible creates a dir clusterconfigs/test1 and copies terraform/files/* there then uses that path to deploy the test1 cluster. All the state files for each cluster would be in their own sub dir and would prevent conflicts when creating additional clusters.

  2. dnsmasq files placed into /etc/NetworkManager/dnsmasq.d/ and /etc/NetworkManager/conf.d/ would have to be cluster-specific to avoid conflicts when deploying multiple clusters in the same domain (e.g. named libvirt_clustername-domainname_dnsmasq.conf and localdns-clustername-domainname.conf respectively, though I'm not 100% certain about the localdns-* configs). The contents of the dnsmasq config file would have to be something like server=/clustername.domainname/networkcidr to avoid conflicts with existing clusters in the same domain, as well as to prevent the KVM host from conflicting with the domain's authoritative name server if one exists (see the sketch below).
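As a concrete illustration of the proposed naming scheme, a task could drop a per-cluster dnsmasq config like this (the file name, content, and upstream DNS address are illustrative, following the scheme described above):

    - name: Drop per-cluster dnsmasq config (illustrative)
      become: true
      ansible.builtin.copy:
        dest: /etc/NetworkManager/dnsmasq.d/libvirt_k8s-test-k8s.test_dnsmasq.conf
        content: |
          server=/k8s.test/192.168.200.1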

proxmox configs

Hi,

Where's the Proxmox config for the username/password or SSH key?

missing gcc prereq

TASK [Run 'install' target on libivrt-provider] **********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/bin/gmake install", "msg": "go: downloading github.com/hashicorp/terraform-plugin-sdk v1.4.0\ngo: downloading github.com/davecgh/go-spew v1.1.1\ngo: downloading github.com/mitchellh/packer v1.3.2\ngo: downloading github.com/libvirt/libvirt-go v5.0.0+incompatible\ngo: downloading github.com/libvirt/libvirt-go-xml v5.0.0+incompatible\ngo: downloading github.com/hooklift/iso9660 v1.0.0\ngo: downloading github.com/c4milo/gotoolkit v0.0.0-20170704181456-e37eeabad07e\ngo: downloading github.com/zclconf/go-cty v1.1.1\ngo: downloading github.com/hashicorp/errwrap v1.0.0\ngo: downloading github.com/hashicorp/go-hclog v0.10.0\ngo: downloading github.com/hashicorp/go-plugin v1.0.1\ngo: downloading google.golang.org/grpc v1.25.1\ngo: downloading github.com/hashicorp/hcl v1.0.0\ngo: downloading github.com/hashicorp/logutils v1.0.0\ngo: downloading github.com/hashicorp/hcl/v2 v2.1.0\ngo: downloading github.com/agext/levenshtein v1.2.2\ngo: downloading github.com/hashicorp/terraform-svchost v0.0.0-20191119180714-d2e4933b9136\ngo: downloading github.com/hashicorp/yamux v0.0.0-20190923154419-df201c70410d\ngo: downloading github.com/hashicorp/go-version v1.2.0\ngo: downloading github.com/mitchellh/cli v1.0.0\ngo: downloading github.com/mitchellh/mapstructure v1.1.2\ngo: downloading github.com/mitchellh/go-testing-interface v1.0.0\ngo: downloading github.com/mitchellh/copystructure v1.0.0\ngo: downloading golang.org/x/text v0.3.2\ngo: downloading github.com/vmihailenco/msgpack v4.0.4+incompatible\ngo: downloading github.com/oklog/run v1.0.0\ngo: downloading github.com/golang/protobuf v1.3.2\ngo: downloading github.com/hashicorp/go-uuid v1.0.1\ngo: downloading github.com/hashicorp/go-multierror v1.0.0\ngo: downloading github.com/mitchellh/go-wordwrap v1.0.0\ngo: downloading golang.org/x/net v0.0.0-20191204025024-5ee1b9f4859a\ngo: downloading github.com/spf13/afero v1.2.2\ngo: downloading github.com/hashicorp/terraform-config-inspect v0.0.0-20191121111010-e9629612a215\ngo: downloading github.com/apparentlymart/go-textseg v1.0.0\ngo: downloading github.com/hashicorp/go-getter v1.4.0\ngo: downloading github.com/bgentry/speakeasy v0.1.0\ngo: downloading github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db\ngo: downloading github.com/armon/go-radix v1.0.0\ngo: downloading golang.org/x/crypto v0.0.0-20191202143827-86a70503ff7e\ngo: downloading golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6\ngo: downloading github.com/hashicorp/go-safetemp v1.0.0\ngo: downloading github.com/ulikunitz/xz v0.5.6\ngo: downloading github.com/google/go-cmp v0.3.1\ngo: downloading golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e\ngo: downloading github.com/posener/complete v1.2.3\ngo: downloading github.com/aws/aws-sdk-go v1.25.47\ngo: downloading github.com/mitchellh/go-homedir v1.1.0\ngo: downloading github.com/hashicorp/go-cleanhttp v0.5.1\ngo: downloading github.com/fatih/color v1.7.0\ngo: downloading github.com/mattn/go-isatty v0.0.10\ngo: downloading github.com/mattn/go-colorable v0.1.4\ngo: downloading google.golang.org/api v0.14.0\ngo: downloading github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d\ngo: downloading github.com/zclconf/go-cty-yaml v1.0.1\ngo: downloading google.golang.org/genproto v0.0.0-20191203220235-3fa9dbf08042\ngo: downloading cloud.google.com/go v0.49.0\ngo: downloading cloud.google.com/go/storage v1.4.0\ngo: downloading go.opencensus.io v0.22.2\ngo: downloading github.com/google/uuid v1.1.1\ngo: downloading 
github.com/googleapis/gax-go/v2 v2.0.5\ngo: downloading github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9\ngo: downloading github.com/mitchellh/reflectwalk v1.0.1\ngo: downloading github.com/apparentlymart/go-cidr v1.0.1\ngo: downloading github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af\n# github.com/libvirt/libvirt-go\nexec: \"gcc\": executable file not found in $PATH\ngmake: *** [Makefile:14: install] Error 2", "rc": 2, "stderr": "go: downloading github.com/hashicorp/terraform-plugin-sdk v1.4.0\ngo: downloading github.com/davecgh/go-spew v1.1.1\ngo: downloading github.com/mitchellh/packer v1.3.2\ngo: downloading github.com/libvirt/libvirt-go v5.0.0+incompatible\ngo: downloading github.com/libvirt/libvirt-go-xml v5.0.0+incompatible\ngo: downloading github.com/hooklift/iso9660 v1.0.0\ngo: downloading github.com/c4milo/gotoolkit v0.0.0-20170704181456-e37eeabad07e\ngo: downloading github.com/zclconf/go-cty v1.1.1\ngo: downloading github.com/hashicorp/errwrap v1.0.0\ngo: downloading github.com/hashicorp/go-hclog v0.10.0\ngo: downloading github.com/hashicorp/go-plugin v1.0.1\ngo: downloading google.golang.org/grpc v1.25.1\ngo: downloading github.com/hashicorp/hcl v1.0.0\ngo: downloading github.com/hashicorp/logutils v1.0.0\ngo: downloading github.com/hashicorp/hcl/v2 v2.1.0\ngo: downloading github.com/agext/levenshtein v1.2.2\ngo: downloading github.com/hashicorp/terraform-svchost v0.0.0-20191119180714-d2e4933b9136\ngo: downloading github.com/hashicorp/yamux v0.0.0-20190923154419-df201c70410d\ngo: downloading github.com/hashicorp/go-version v1.2.0\ngo: downloading github.com/mitchellh/cli v1.0.0\ngo: downloading github.com/mitchellh/mapstructure v1.1.2\ngo: downloading github.com/mitchellh/go-testing-interface v1.0.0\ngo: downloading github.com/mitchellh/copystructure v1.0.0\ngo: downloading golang.org/x/text v0.3.2\ngo: downloading github.com/vmihailenco/msgpack v4.0.4+incompatible\ngo: downloading github.com/oklog/run v1.0.0\ngo: downloading github.com/golang/protobuf v1.3.2\ngo: downloading github.com/hashicorp/go-uuid v1.0.1\ngo: downloading github.com/hashicorp/go-multierror v1.0.0\ngo: downloading github.com/mitchellh/go-wordwrap v1.0.0\ngo: downloading golang.org/x/net v0.0.0-20191204025024-5ee1b9f4859a\ngo: downloading github.com/spf13/afero v1.2.2\ngo: downloading github.com/hashicorp/terraform-config-inspect v0.0.0-20191121111010-e9629612a215\ngo: downloading github.com/apparentlymart/go-textseg v1.0.0\ngo: downloading github.com/hashicorp/go-getter v1.4.0\ngo: downloading github.com/bgentry/speakeasy v0.1.0\ngo: downloading github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db\ngo: downloading github.com/armon/go-radix v1.0.0\ngo: downloading golang.org/x/crypto v0.0.0-20191202143827-86a70503ff7e\ngo: downloading golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6\ngo: downloading github.com/hashicorp/go-safetemp v1.0.0\ngo: downloading github.com/ulikunitz/xz v0.5.6\ngo: downloading github.com/google/go-cmp v0.3.1\ngo: downloading golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e\ngo: downloading github.com/posener/complete v1.2.3\ngo: downloading github.com/aws/aws-sdk-go v1.25.47\ngo: downloading github.com/mitchellh/go-homedir v1.1.0\ngo: downloading github.com/hashicorp/go-cleanhttp v0.5.1\ngo: downloading github.com/fatih/color v1.7.0\ngo: downloading github.com/mattn/go-isatty v0.0.10\ngo: downloading github.com/mattn/go-colorable v0.1.4\ngo: downloading google.golang.org/api v0.14.0\ngo: downloading 
github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d\ngo: downloading github.com/zclconf/go-cty-yaml v1.0.1\ngo: downloading google.golang.org/genproto v0.0.0-20191203220235-3fa9dbf08042\ngo: downloading cloud.google.com/go v0.49.0\ngo: downloading cloud.google.com/go/storage v1.4.0\ngo: downloading go.opencensus.io v0.22.2\ngo: downloading github.com/google/uuid v1.1.1\ngo: downloading github.com/googleapis/gax-go/v2 v2.0.5\ngo: downloading github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9\ngo: downloading github.com/mitchellh/reflectwalk v1.0.1\ngo: downloading github.com/apparentlymart/go-cidr v1.0.1\ngo: downloading github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af\n# github.com/libvirt/libvirt-go\nexec: \"gcc\": executable file not found in $PATH\ngmake: *** [Makefile:14: install] Error 2\n", "stderr_lines": ["go: downloading github.com/hashicorp/terraform-plugin-sdk v1.4.0", "go: downloading github.com/davecgh/go-spew v1.1.1", "go: downloading github.com/mitchellh/packer v1.3.2", "go: downloading github.com/libvirt/libvirt-go v5.0.0+incompatible", "go: downloading github.com/libvirt/libvirt-go-xml v5.0.0+incompatible", "go: downloading github.com/hooklift/iso9660 v1.0.0", "go: downloading github.com/c4milo/gotoolkit v0.0.0-20170704181456-e37eeabad07e", "go: downloading github.com/zclconf/go-cty v1.1.1", "go: downloading github.com/hashicorp/errwrap v1.0.0", "go: downloading github.com/hashicorp/go-hclog v0.10.0", "go: downloading github.com/hashicorp/go-plugin v1.0.1", "go: downloading google.golang.org/grpc v1.25.1", "go: downloading github.com/hashicorp/hcl v1.0.0", "go: downloading github.com/hashicorp/logutils v1.0.0", "go: downloading github.com/hashicorp/hcl/v2 v2.1.0", "go: downloading github.com/agext/levenshtein v1.2.2", "go: downloading github.com/hashicorp/terraform-svchost v0.0.0-20191119180714-d2e4933b9136", "go: downloading github.com/hashicorp/yamux v0.0.0-20190923154419-df201c70410d", "go: downloading github.com/hashicorp/go-version v1.2.0", "go: downloading github.com/mitchellh/cli v1.0.0", "go: downloading github.com/mitchellh/mapstructure v1.1.2", "go: downloading github.com/mitchellh/go-testing-interface v1.0.0", "go: downloading github.com/mitchellh/copystructure v1.0.0", "go: downloading golang.org/x/text v0.3.2", "go: downloading github.com/vmihailenco/msgpack v4.0.4+incompatible", "go: downloading github.com/oklog/run v1.0.0", "go: downloading github.com/golang/protobuf v1.3.2", "go: downloading github.com/hashicorp/go-uuid v1.0.1", "go: downloading github.com/hashicorp/go-multierror v1.0.0", "go: downloading github.com/mitchellh/go-wordwrap v1.0.0", "go: downloading golang.org/x/net v0.0.0-20191204025024-5ee1b9f4859a", "go: downloading github.com/spf13/afero v1.2.2", "go: downloading github.com/hashicorp/terraform-config-inspect v0.0.0-20191121111010-e9629612a215", "go: downloading github.com/apparentlymart/go-textseg v1.0.0", "go: downloading github.com/hashicorp/go-getter v1.4.0", "go: downloading github.com/bgentry/speakeasy v0.1.0", "go: downloading github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db", "go: downloading github.com/armon/go-radix v1.0.0", "go: downloading golang.org/x/crypto v0.0.0-20191202143827-86a70503ff7e", "go: downloading golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6", "go: downloading github.com/hashicorp/go-safetemp v1.0.0", "go: downloading github.com/ulikunitz/xz v0.5.6", "go: downloading github.com/google/go-cmp v0.3.1", "go: downloading golang.org/x/sys 
v0.0.0-20191204072324-ce4227a45e2e", "go: downloading github.com/posener/complete v1.2.3", "go: downloading github.com/aws/aws-sdk-go v1.25.47", "go: downloading github.com/mitchellh/go-homedir v1.1.0", "go: downloading github.com/hashicorp/go-cleanhttp v0.5.1", "go: downloading github.com/fatih/color v1.7.0", "go: downloading github.com/mattn/go-isatty v0.0.10", "go: downloading github.com/mattn/go-colorable v0.1.4", "go: downloading google.golang.org/api v0.14.0", "go: downloading github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d", "go: downloading github.com/zclconf/go-cty-yaml v1.0.1", "go: downloading google.golang.org/genproto v0.0.0-20191203220235-3fa9dbf08042", "go: downloading cloud.google.com/go v0.49.0", "go: downloading cloud.google.com/go/storage v1.4.0", "go: downloading go.opencensus.io v0.22.2", "go: downloading github.com/google/uuid v1.1.1", "go: downloading github.com/googleapis/gax-go/v2 v2.0.5", "go: downloading github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9", "go: downloading github.com/mitchellh/reflectwalk v1.0.1", "go: downloading github.com/apparentlymart/go-cidr v1.0.1", "go: downloading github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af", "# github.com/libvirt/libvirt-go", "exec: \"gcc\": executable file not found in $PATH", "gmake: *** [Makefile:14: install] Error 2"], "stdout": "go install -ldflags \"-X main.version=$(git describe --always --abbrev=40 --dirty)\"\n", "stdout_lines": ["go install -ldflags \"-X main.version=$(git describe --always --abbrev=40 --dirty)\""]}

virtualization_packages is missing?

Looking at the `Install qemu, libvirt, git` task in 01_install_virtualization_tools.yml, the `virtualization_packages` variable that `with_items` loops over seems to be missing. Is it defined anywhere else?
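If the variable really is undefined, a minimal sketch of a workaround, assuming the task only needs a list to loop over (the package names here are illustrative and distro-dependent):

    virtualization_packages:
      - qemu-kvm
      - libvirt
      - git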

Questions about this provisioning system

Hello mate, I'm looking into making use of this provisioner on my machine, but before doing so I had some questions.

  • If I have existing VMs configured on my host machine, could they be affected by running this provisioner?
    • Will my existing pools, network configs, VMs, etc. be affected or deleted?
  • Once an infra is first provisioned, can it be scaled up (node count, resource allocation) without loss of data?
  • Could one take down the provisioned infra while persisting the data for migration purposes?

I'm sorry if any of my questions don't really make sense as I'm not very knowledgeable in this space. Thanks!

22_apply_network_plugin.yml: let it wait a little longer

I'd suggest setting wait_timeout: 600.
In my local deployment it takes around 3 minutes for all coredns pods to become ready,
but the default wait timeout is 120 seconds (2 minutes).
As a result:

<...> FAILED! => {"changed": false, "duration": 120, "method": "update", "msg": "\"Deployment\" \"coredns\": Timed out waiting on resource", <...>

I suggest:

    - name: Wait for core-dns pods to be up and running
      kubernetes.core.k8s:
        state: present
        kind: Deployment
        name: coredns
...
        wait: true
        wait_timeout: 600

skipping: no hosts matched

I may have missed some config, but these are the errors I'm getting after running ansible-galaxy collection install -r requirements.yml and ansible-playbook main.yml:

skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: masters
[WARNING]: Could not match supplied host pattern, ignoring: workers

I'm unsure where I should define the hosts file that Ansible will use, and whether this repo will provision the VMs at all.

Should I create the VMs before running ansible-playbook main.yml, or can I define the hosts somewhere?
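For reference, a minimal sketch of what the inventory could look like, assuming the vm_host group mentioned in the README; the masters and workers groups are populated by the provisioner at runtime, so no pre-created VMs should be needed:

    [vm_host]
    localhost ansible_connection=local

See also the -i inventory fix reported in a later issue below.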

VMs are not reachable once provisioned

The master and worker VMs start, but something seems to be wrong with the Terraform output:

$ sudo virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     k8s-master-0                   running
 2     k8s-master-1                   running
 3     k8s-worker-0                   running
 4     k8s-worker-1                   running
TASK [Wait 600 seconds for target connection to become reachable/usable] *********************************************************************************************************************
fatal: [k8s-worker-0]: FAILED! => {"changed": false, "elapsed": 120, "msg": "timed out waiting for ping module test success: Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-worker-0: Name or service not known"}
fatal: [k8s-master-1]: FAILED! => {"changed": false, "elapsed": 120, "msg": "timed out waiting for ping module test success: Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-master-1: Name or service not known"}
fatal: [k8s-worker-1]: FAILED! => {"changed": false, "elapsed": 121, "msg": "timed out waiting for ping module test success: Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-worker-1: Name or service not known"}
fatal: [k8s-master-0]: FAILED! => {"changed": false, "elapsed": 125, "msg": "timed out waiting for ping module test success: Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-master-0: Name or service not known"}
[masters]$ terraform output
ips = []
macs = [
  "52:54:00:CE:58:8B",
  "52:54:00:02:39:4E",
]
[masters]$ cd ../workers/
[workers]$ terraform output
ips = []
macs = [
  "52:54:00:EA:A4:D0",
  "52:54:00:1A:97:C5",
]
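A possible way to narrow this down (an assumption, not a confirmed diagnosis): ips = [] with MACs present suggests Terraform never saw DHCP leases for the guests, so querying libvirt directly shows whether they got addresses at all. Assuming the network created for the cluster (check virsh net-list for the actual name):

    sudo virsh net-dhcp-leases k8s-test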

`gopath` is undefined

01_install_virtualization_tools.yml:47: src: "{{ gopath }}/bin/terraform-provider-libvirt"

TASK [Copy libvirt provider to plugins directory] ********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'gopath' is undefined\n\nThe error appears to be in '/home/vance/libvirt-k8s-provisioner/01_install_virtualization_tools.yml': line 45, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n    - name: Copy libvirt provider to plugins directory\n      ^ here\n"}
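A possible workaround until the variable gets a default (an untested assumption that nothing else depends on it): pass it explicitly on the command line, pointing at wherever go install put the provider binary, which defaults to $HOME/go:

    ansible-playbook main.yml -e gopath=$HOME/go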

missing go prereq

TASK [Run 'install' target on libivrt-provider] **********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/bin/gmake install", "msg": "/bin/sh: go: command not found\ngmake: *** [Makefile:14: install] Error 127", "rc": 2, "stderr": "/bin/sh: go: command not found\ngmake: *** [Makefile:14: install] Error 127\n", "stderr_lines": ["/bin/sh: go: command not found", "gmake: *** [Makefile:14: install] Error 127"], "stdout": "go install -ldflags \"-X main.version=$(git describe --always --abbrev=40 --dirty)\"\n", "stdout_lines": ["go install -ldflags \"-X main.version=$(git describe --always --abbrev=40 --dirty)\""]}
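Presumably the fix is simply having the Go toolchain installed before the play invokes gmake install; on a dnf-based host (an assumption, based on gmake in the output), something like:

    sudo dnf install -y golang gcc

Adding gcc in the same step also avoids the cgo failure reported further up.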

Debian support

Love this project and the documentation, but is there any particular reason why Debian isn't supported?

Great work, first-time finder.

Hey, excellent work here, and you have saved me a huge burden in my home lab setup.

One small note from my onboarding: just running

ansible-playbook main.yml

gave me some warnings and either failed or finished very quickly.

Adding a simple -i flag got me back in business:

ansible-playbook main.yml -i inventory
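If you'd rather not pass -i on every run, plain Ansible behavior (not specific to this repo) lets you set a default inventory in an ansible.cfg next to the playbooks:

    [defaults]
    inventory = inventory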

missing netaddr prereq

TASK [Define libvirt network] ****************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'template'. Error was a <class 'ansible.errors.AnsibleFilterError'>, original message: The next_nth_usable filter requires python's netaddr be installed on the ansible controller"}
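As the message says, the next_nth_usable filter needs the netaddr Python package on the Ansible controller, so installing it there should clear the failure, assuming pip targets the same interpreter Ansible runs under:

    python3 -m pip install --user netaddr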
