vitobotta / hetzner-k3s

License: MIT License

The easiest way to create production grade Kubernetes clusters in Hetzner Cloud

What is this?

This is a CLI tool to quickly create and manage Kubernetes clusters in Hetzner Cloud using the lightweight Kubernetes distribution k3s from Rancher.

Hetzner Cloud is an excellent cloud provider that offers a truly great service with one of the best performance/cost ratios on the market, and locations in both Europe and the USA.

k3s is my favorite Kubernetes distribution because it uses much less memory and CPU, leaving more resources to workloads. It is also super quick to deploy and upgrade because it's a single binary.

Using hetzner-k3s, creating a highly available k3s cluster with 3 masters for the control plane and 3 worker nodes takes only 2-3 minutes. This includes:

  • creating all the infrastructure resources (servers, private network, firewall, load balancer for the API server for HA clusters)
  • deploying k3s to the nodes
  • installing the Hetzner Cloud Controller Manager to provision load balancers right away
  • installing the Hetzner CSI Driver to provision persistent volumes using Hetzner's block storage
  • installing the Rancher System Upgrade Controller to make upgrades to a newer version of k3s easy and quick
  • installing the Cluster Autoscaler to allow for autoscaling node pools

Also see this wiki page for a tutorial on how to set up a cluster with the most common setup to get you started.

If you like this project and would like to help its development, consider becoming a sponsor.


Who am I?

I'm a Senior Backend Engineer and DevOps specialist based in Finland, working for the event management platform Brella.

I also write a technical blog on programming, DevOps and related technologies.


Prerequisites

All you need to use this tool is:

  • a Hetzner Cloud account

  • a Hetzner Cloud token: create a project from the cloud console, then an API token with both read and write permissions (sidebar > Security > API Tokens); the token is shown only once, so be sure to store it somewhere safe

  • kubectl installed


Installation

Before using the tool, be sure to have kubectl installed as it's required to install some components in the cluster and perform k3s upgrades.

macOS

With Homebrew

brew install vitobotta/tap/hetzner_k3s

Binary installation

You need to install these dependencies first:

  • libssh2
  • libevent
  • bdw-gc
  • libyaml
  • pcre
  • gmp
Intel
wget https://github.com/vitobotta/hetzner-k3s/releases/download/v1.1.5/hetzner-k3s-macos-amd64
chmod +x hetzner-k3s-macos-amd64
sudo mv hetzner-k3s-macos-amd64 /usr/local/bin/hetzner-k3s
Apple Silicon / M1
wget https://github.com/vitobotta/hetzner-k3s/releases/download/v1.1.5/hetzner-k3s-macos-arm64
chmod +x hetzner-k3s-macos-arm64
sudo mv hetzner-k3s-macos-arm64 /usr/local/bin/hetzner-k3s

Linux

amd64

wget https://github.com/vitobotta/hetzner-k3s/releases/download/v1.1.5/hetzner-k3s-linux-amd64
chmod +x hetzner-k3s-linux-amd64
sudo mv hetzner-k3s-linux-amd64 /usr/local/bin/hetzner-k3s

arm

wget https://github.com/vitobotta/hetzner-k3s/releases/download/v1.1.5/hetzner-k3s-linux-arm64
chmod +x hetzner-k3s-linux-arm64
sudo mv hetzner-k3s-linux-arm64 /usr/local/bin/hetzner-k3s

Windows

I recommend using the Linux binary under WSL.


Creating a cluster

The tool requires a simple YAML configuration file in order to create/upgrade/delete clusters, like the example below (commented lines are optional settings):

---
hetzner_token: <your token>
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.26.4+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
use_ssh_agent: false # set to true if your key has a passphrase or if SSH connections don't work or seem to hang without agent. See https://github.com/vitobotta/hetzner-k3s#limitations
# ssh_port: 22
ssh_allowed_networks:
  - 0.0.0.0/0 # ensure your current IP is included in the range
api_allowed_networks:
  - 0.0.0.0/0 # ensure your current IP is included in the range
private_network_subnet: 10.0.0.0/16 # ensure this doesn't overlap with other networks in the same project
disable_flannel: false # set to true if you want to install a different CNI
schedule_workloads_on_masters: false
# cluster_cidr: 10.244.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for pod IPs
# service_cidr: 10.43.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for service IPs. Warning, if you change this, you should also change cluster_dns!
# cluster_dns: 10.43.0.10 # optional: IPv4 Cluster IP for coredns service. Needs to be an address from the service_cidr range
# enable_public_net_ipv4: false # default is true
# enable_public_net_ipv6: false # default is true
# image: rocky-9 # optional: default is ubuntu-22.04
# autoscaling_image: 103908130 # optional, defaults to the `image` setting
# snapshot_os: microos # optional: specifies the OS type when using a custom snapshot
# cloud_controller_manager_manifest_url: "https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/download/v1.19.0/ccm-networks.yaml"
# csi_driver_manifest_url: "https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.6.0/deploy/kubernetes/hcloud-csi.yml"
# system_upgrade_controller_deployment_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/system-upgrade-controller.yaml"
# system_upgrade_controller_crd_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/crd.yaml"
# cluster_autoscaler_manifest_url: "https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/hetzner/examples/cluster-autoscaler-run-on-master.yaml"
datastore:
  mode: etcd # etcd (default) or external
  external_datastore_endpoint: postgres://....
masters_pool:
  instance_type: cpx21
  instance_count: 3
  location: nbg1
worker_node_pools:
- name: small-static
  instance_type: cpx21
  instance_count: 4
  location: hel1
  # image: debian-11
  # labels:
  #   - key: purpose
  #     value: blah
  # taints:
  #   - key: something
  #     value: value1:NoSchedule
- name: big-autoscaled
  instance_type: cpx31
  instance_count: 2
  location: fsn1
  autoscaling:
    enabled: true
    min_instances: 0
    max_instances: 3
# additional_packages:
# - somepackage
# post_create_commands:
# - apt update
# - apt upgrade -y
# - apt autoremove -y
# enable_encryption: true
# existing_network: <specify if you want to use an existing network, otherwise one will be created for this cluster>
# kube_api_server_args:
# - arg1
# - ...
# kube_scheduler_args:
# - arg1
# - ...
# kube_controller_manager_args:
# - arg1
# - ...
# kube_cloud_controller_manager_args:
# - arg1
# - ...
# kubelet_args:
# - arg1
# - ...
# kube_proxy_args:
# - arg1
# - ...
# api_server_hostname: k8s.example.com # optional: DNS for the k8s API LoadBalancer. After the script has run, create a DNS record with the address of the API LoadBalancer.

Most settings should be self-explanatory; you can run hetzner-k3s releases to see a list of the available k3s releases.

If you don't want to specify the Hetzner token in the config file (for example if you use the tool with CI or want to safely commit the config file to a repository), you can use the HCLOUD_TOKEN environment variable instead, which takes precedence.

If you set masters_pool.instance_count to 1, the tool will create a control plane that is not highly available; for production clusters you should set it to a number greater than 1. This number must be odd to avoid split-brain issues with etcd, and the recommended value is 3.

You can specify any number of worker node pools, static or autoscaled, and have mixed nodes with different specs for different workloads.

Hetzner cloud-init settings (additional_packages and post_create_commands) can be defined at the root level of the configuration file as well as per pool, if different pools need different settings. Settings configured for a pool override those at the root level.
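For example (a sketch only; the package names below are placeholders), you could install one extra package on all nodes and override the list for a single pool:

```yaml
additional_packages:
- somepackage            # installed on every node (placeholder name)
worker_node_pools:
- name: small-static
  instance_type: cpx21
  instance_count: 4
  location: hel1
  additional_packages:   # overrides the root-level list for this pool only
  - someotherpackage     # placeholder name
```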

At the moment Hetzner Cloud has five locations: two in Germany (nbg1, Nuremberg, and fsn1, Falkenstein), one in Finland (hel1, Helsinki) and two in the USA (ash, Ashburn, Virginia, and hil, Hillsboro, Oregon). Keep in mind that the US locations currently only offer instances with AMD CPUs, while the newly introduced ARM instances are only available in Falkenstein (fsn1) for now.

For the available instance types and their specs, either check from inside a project when adding a server manually or run the following with your Hetzner token:

curl -H "Authorization: Bearer $API_TOKEN" 'https://api.hetzner.cloud/v1/server_types'

To create the cluster run:

hetzner-k3s create --config cluster_config.yaml

This will take a few minutes depending on the number of masters and worker nodes.

Disabling public IPs (IPv4 or IPv6 or both) on nodes

With enable_public_net_ipv4: false and enable_public_net_ipv6: false you can disable the public interface on all nodes, for improved security and to save on IPv4 address costs. These settings are global and affect all master and worker nodes. If you disable public IPs, be sure to run hetzner-k3s from a machine that has access to the same private network as the nodes, either directly or via a VPN. Additional networking setup is required via cloud-init, so it's important that the machine from which you run hetzner-k3s has internet access and correctly configured DNS; otherwise the cluster creation process will get stuck after creating the nodes. See this discussion for additional information and instructions.
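For reference, both options go at the root level of the configuration file:

```yaml
enable_public_net_ipv4: false   # no public IPv4 on any node
enable_public_net_ipv6: false   # no public IPv6 on any node
```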

Using alternative OS images

By default the ubuntu-22.04 image is used for all nodes, but you can specify a different default image with the root-level image config option, or even different images for different static node pools by setting the image config option in each node pool. This way you can, for example, have node pools with ARM instances use an OS image built for ARM; to use, say, Ubuntu 22.04 on ARM instances, set image to the specific image ID 103908130. As for autoscaling, due to a limitation in the Cluster Autoscaler for Hetzner it is not yet possible to specify a different image for each autoscaled pool; for now you can specify the image for all autoscaled pools with the autoscaling_image setting if you want to use an image different from the one specified in image.

To see the list of available images, run the following:

export API_TOKEN=...

curl -H "Authorization: Bearer $API_TOKEN" 'https://api.hetzner.cloud/v1/images?per_page=100'

Besides the default OS images, it's also possible to use a snapshot that you have already created from an existing server. With custom snapshots you'll need to specify the ID of the snapshot/image, not the description you gave when you created the template server.

I've tested snapshots for openSUSE MicroOS but others might work too. You can easily create a snapshot for MicroOS using this tool. Creating the snapshot takes just a couple of minutes and then you can use it with hetzner-k3s by setting the config option image to the ID of the snapshot, and snapshot_os to microos.
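Putting this together, a MicroOS snapshot would be referenced in the config like this (the ID below is a placeholder; use your own snapshot's ID as returned by the images API):

```yaml
image: 123456789      # placeholder: the ID of your custom snapshot
snapshot_os: microos
```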

Limitations:

  • if possible, use modern SSH keys, since some operating systems have deprecated old crypto based on SHA1; I recommend ECDSA keys instead of the old RSA type
  • if you use a snapshot instead of one of the default images, the creation of the servers will take longer than when using a regular image
  • the setting api_allowed_networks allows specifying which networks can access the Kubernetes API, but this only works with single master clusters currently. Multi-master HA clusters require a load balancer for the API, but load balancers are not yet covered by Hetzner's firewalls
  • if you enable autoscaling for one or more nodepools, do not change that setting afterwards as it can cause problems to the autoscaler
  • autoscaling is only supported when using Ubuntu or one of the other default images, not snapshots
  • worker nodes created by the autoscaler must be deleted manually from the Hetzner Console when deleting the cluster (this will be addressed in a future update)
  • SSH keys with passphrases can only be used if you set use_ssh_agent to true and use an SSH agent to access your key. To start an agent, e.g. on macOS:
eval "$(ssh-agent -s)"
ssh-add --apple-use-keychain ~/.ssh/<private key>

Idempotency

The create command can be run any number of times with the same configuration without causing any issue, since the process is idempotent. This means that if the create process gets stuck or throws errors (for example if the Hetzner API is unavailable or there are timeouts), you can just stop the current command and re-run it with the same configuration to continue from where it left off.

Adding nodes

To add one or more nodes to a node pool, just change the instance count in the configuration file for that node pool and re-run the create command.

Important: if you are increasing the size of a node pool created prior to v0.5.7, please see this thread.

Scaling down a node pool

To make a node pool smaller:

  • decrease the instance count for the node pool in the configuration file so that those extra nodes are not recreated in the future
  • delete the nodes from Kubernetes (kubectl delete node <name>)
  • delete the instances from the cloud console if the Cloud Controller Manager doesn't delete them automatically (make sure you delete the correct ones 🤭)

In a future release I will add some automation for the cleanup.

Replacing a problematic node

  • delete the node from Kubernetes (kubectl delete node <name>)
  • delete the correct instance from the cloud console
  • re-run the create command. This will re-create the missing node and have it join the cluster

Converting a non-HA cluster to HA

It's easy to convert a non-HA cluster with a single master to an HA cluster with multiple masters. Just change the masters' instance count and re-run the create command. This will create a load balancer for the API server and update the kubeconfig so that all API requests go through the load balancer.


Upgrading to a new version of k3s

To upgrade the cluster to a newer version of k3s, all you need to do is run the following command:

hetzner-k3s upgrade --config cluster_config.yaml --new-k3s-version v1.27.1-rc2+k3s1

You just need to specify the new k3s version as an additional parameter; the configuration file will be updated with the new version automatically during the upgrade. To see the list of available k3s releases, run hetzner-k3s releases.

Note: (single master clusters only) the API server will briefly be unavailable during the upgrade of the controlplane.

To check the upgrade progress, run watch kubectl get nodes -owide. You will see the masters being upgraded one at a time, followed by the worker nodes.

NOTE: if you haven't used the tool in a while before upgrading, you may need to delete the file cluster_config.yaml.example in your temp folder to refresh the list of available k3s versions.

What to do if the upgrade doesn't go smoothly

If the upgrade gets stuck for some reason, or it doesn't upgrade all the nodes:

  1. Clean up the existing upgrade plans and jobs, and restart the upgrade controller
kubectl -n system-upgrade delete job --all
kubectl -n system-upgrade delete plan --all

kubectl label node --all plan.upgrade.cattle.io/k3s-server- plan.upgrade.cattle.io/k3s-agent-

kubectl -n system-upgrade rollout restart deployment system-upgrade-controller
kubectl -n system-upgrade rollout status deployment system-upgrade-controller

You can also check the logs of the system upgrade controller's pod:

kubectl -n system-upgrade \
  logs -f $(kubectl -n system-upgrade get pod -l pod-template-hash -o jsonpath="{.items[0].metadata.name}")

A final note about upgrades: if for some reason the upgrade gets stuck after upgrading the masters and before upgrading the worker nodes, cleaning up the resources as described above might not be enough. In that case, also try running the following to tell the upgrade job for the workers that the masters have already been upgraded, so the upgrade can continue for the workers:

kubectl label node <master1> <master2> <master3> plan.upgrade.cattle.io/k3s-server=upgraded

Upgrading the OS on nodes

  • consider adding a temporary node during the process if you don't have enough spare capacity in the cluster
  • drain one node
  • update the OS packages
  • reboot
  • uncordon
  • proceed with the next node

If you want to automate this process I recommend you install the Kubernetes Reboot Daemon ("Kured"). For this to work properly, make sure the OS you choose for the nodes has unattended upgrades enabled at least for security updates. For example if the image is Ubuntu, you can add this to the configuration file before running the create command:

additional_packages:
- unattended-upgrades
- update-notifier-common
post_create_commands:
- sudo systemctl enable unattended-upgrades
- sudo systemctl start unattended-upgrades

Check the Kured documentation for configuration options like maintenance window etc.


Deleting a cluster

To delete a cluster, run:

hetzner-k3s delete --config cluster_config.yaml

This will delete all the resources in the Hetzner Cloud project created by hetzner-k3s directly.

NOTE: at the moment instances created by the cluster autoscaler, as well as load balancers and persistent volumes created by deploying your applications must be deleted manually. This may be addressed in a future release.


Additional info

Load balancers

Once the cluster is ready, you can already provision services of type LoadBalancer for your workloads (such as the Nginx ingress controller for example) thanks to the Hetzner Cloud Controller Manager that is installed automatically.

There are some annotations that you can add to your services to configure the load balancers. At a minimum you need these two:

load-balancer.hetzner.cloud/location: nbg1 # the location of the load balancer must be the same as the nodes' location
load-balancer.hetzner.cloud/use-private-ip: "true" # ensures the traffic between LB and nodes goes through the private network, so you don't need to change anything in the firewall

The above are required, but I also recommend these:

load-balancer.hetzner.cloud/hostname: <a valid fqdn>
load-balancer.hetzner.cloud/http-redirect-https: 'false'
load-balancer.hetzner.cloud/name: <lb name>
load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'

I set load-balancer.hetzner.cloud/hostname to a valid hostname that I configure (after creating the load balancer) with the IP of the load balancer; I use this together with the annotation load-balancer.hetzner.cloud/uses-proxyprotocol: 'true' to enable the proxy protocol. The reason is that I enable the proxy protocol on the load balancers so that my ingress controller and applications can "see" the real IP address of the client. However, when this is enabled there is a problem where cert-manager fails http01 challenges; you can find an explanation of why here, but the easy fix provided by some providers - including Hetzner - is to configure the load balancer to use a hostname instead of an IP. Again, read the explanation for the details, but if you care about seeing the actual IP of the client then I recommend you use these two annotations.

The other annotations should be self-explanatory. You can find a list of the available annotations here.
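As an illustration (a minimal sketch; the Service name, selector and hostname are placeholders), a Service using these annotations might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller                              # placeholder name
  annotations:
    load-balancer.hetzner.cloud/location: nbg1
    load-balancer.hetzner.cloud/use-private-ip: "true"
    load-balancer.hetzner.cloud/hostname: lb.example.com   # placeholder FQDN
    load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller                             # placeholder selector
  ports:
  - name: https
    port: 443
    targetPort: 443
```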

Note: in a future release it will be possible to configure ingress controllers with host ports, so it will be possible to use an ingress without having to buy a load balancer, but for the time being a load balancer is still required.

Persistent volumes

Once the cluster is ready you can create persistent volumes out of the box with the default storage class hcloud-volumes, since the Hetzner CSI driver is installed automatically. This uses Hetzner's block storage (based on Ceph, so it's replicated and highly available) for your persistent volumes. Note that the minimum size of a volume is 10Gi; if you request a smaller size, the volume will be created with a capacity of 10Gi anyway.
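For example, a claim against the default storage class looks like this (a minimal sketch; the claim name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data          # placeholder name
spec:
  storageClassName: hcloud-volumes
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi      # minimum volume size on Hetzner block storage
```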

Keeping a project per cluster

If you want to create multiple clusters per project, see Configuring Cluster-CIDR and Service-CIDR. Make sure that every cluster has its own dedicated Cluster- and Service-CIDR; if they overlap, it will cause problems. But I still recommend keeping clusters in separate projects. This way, if you want to delete a cluster with all the resources created for it, you can just delete the project.

Configuring Cluster-CIDR and Service-CIDR

Cluster-CIDR and Service-CIDR describe the IP ranges used for pods and services respectively. Under normal circumstances you should not need to change these values, but advanced scenarios may require changing them to avoid networking conflicts.

Changing the Cluster-CIDR (Pod IP-Range):

To change the Cluster-CIDR, uncomment/add the cluster_cidr option in your cluster configuration file and provide a valid network in CIDR notation. The provided network must not be a subnet of your private network.

Changing the Service-CIDR (Service IP-Range):

To change the Service-CIDR, uncomment/add the service_cidr option in your cluster configuration file and provide a valid network in CIDR notation. The provided network must not be a subnet of your private network.

Also uncomment the cluster_dns option and provide a single IP address from your service_cidr range. cluster_dns sets the IP address of the coredns service.
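For example (the range below is only an illustration, not a recommendation):

```yaml
service_cidr: 10.96.0.0/16   # example custom service range
cluster_dns: 10.96.0.10      # must be an address inside service_cidr
```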

Sizing the Networks

The networks you provide should have enough space for the expected number of pods and services. By default, /16 networks are used. Make sure you choose an adequate size, as changing the CIDR afterwards is not supported.
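As a quick back-of-the-envelope check (plain shell arithmetic, not part of hetzner-k3s), you can compute how many addresses a given prefix length provides:

```shell
# Number of addresses in a network with the given prefix length.
prefix=16
addresses=$(( 1 << (32 - prefix) ))
echo "A /$prefix network provides $addresses addresses"   # → 65536 addresses
```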


Troubleshooting

If the tool hangs forever after creating the servers and you see timeouts, this may be caused by problems with your SSH key, for example if you use a key with a passphrase or an older key (due to the deprecation of some crypto algorithms in newer operating systems). In this case you may want to try setting use_ssh_agent to true to use the SSH agent. If you are not familiar with what an SSH agent is, take a look at this page for an explanation.


Contributing and support

Please create a PR if you want to propose any changes, or open an issue if you are having trouble with the tool - I will do my best to help if I can.

If you would like to financially support the project, consider becoming a sponsor.


Building from source

This tool is written in Crystal. To build it, or to make changes to the code and try them, you will need to install Crystal locally or work in a container.

This repository contains a Dockerfile that builds a container image with Crystal as well as the other required dependencies. There is also a Compose file to conveniently run a container using that image, and mount the source code into the container. Finally, there is a devcontainer file that you can use with compatible IDEs like Visual Studio Code and the Dev Containers extension.

Developing with VSCode

You need Visual Studio Code and the Dev Containers extension. Open the project in VSCode (for instance, by executing code . in the root directory of this git repository). You should see a pop-up dialog prompting you to "Reopen in Container". Do that, then wait until the build is complete and the server has started; then click on "+" to open a terminal inside the container.

Note: if for some reason you can't find the Dev Containers extension in the Marketplace (for instance, if the first result is the Docker extension instead of Dev Containers), check that you have the official build of VSCode. It looks like if you're running an Open Source build, some extensions are disabled.

Developing with Compose

If you can't or don't want to install VSCode, you can also develop in the exact same container with Docker and Compose.

To build and run the development container, run:

docker compose up -d

Then, to enter the container:

docker compose exec hetzner-k3s bash

Inside the container

Once you are inside the dev container (whether you used VSCode or directly Docker Compose), you can run hetzner-k3s like this:

crystal run ./src/hetzner-k3s.cr -- create --config cluster_config.yaml

To generate a binary, you can do:

crystal build ./src/hetzner-k3s.cr --static

The --static flag ensures that the resulting binary is statically linked and doesn't depend on libraries that may or may not be available on the system where you want to run it.


License

This tool is available as open source under the terms of the MIT License.


Code of conduct

Everyone interacting in the hetzner-k3s project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.

Stargazers over time

Stargazers over time

hetzner-k3s's People

Contributors

acschm1d avatar aleksasiriski avatar compidev avatar creib avatar cwilhelm avatar derlinuxer avatar easystartup-io avatar floppy012 avatar funzinator avatar janosmiko avatar jpetazzo avatar khustochka avatar lloesche avatar malte-j avatar mgalesloot avatar mike667 avatar n3rdc4ptn avatar pysen avatar quorak avatar systeemkabouter avatar szepeviktor avatar tunatoksoz avatar vitobotta avatar

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

hetzner-k3s's Issues

host key mismatch

im pretty sure the reason is that i had this ip address in the past

#<Thread:0x0000558fa7638238 /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:144 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	22: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:144:in `block (2 levels) in create_resources'
	21: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `wait_for_ssh'
	20: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `loop'
	19: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:439:in `block in wait_for_ssh'
	18: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:452:in `ssh'
	17: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `start'
	16: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `new'
	15: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:90:in `initialize'
	14: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:223:in `wait'
	13: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:223:in `loop'
	12: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:225:in `block in wait'
	11: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:190:in `poll_message'
	10: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:190:in `loop'
	 9: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:210:in `block in poll_message'
	 8: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:184:in `accept_kexinit'
	 7: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:245:in `proceed!'
	 6: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:445:in `exchange_keys'
	 5: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/kex/abstract.rb:49:in `exchange_keys'
	 4: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/kex/abstract.rb:77:in `verify_server_key'
	 3: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/accept_new_or_local_tunnel.rb:17:in `verify'
	 2: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/accept_new.rb:18:in `verify'
	 1: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/always.rb:32:in `verify'
/var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/always.rb:50:in `process_cache_miss': fingerprint SHA256:67JzxGepeDTloRSotU9vlZ7OuucQBji3F5Qw7otu6xU does not match for "116.203.224.30" (Net::SSH::HostKeyMismatch)
Traceback (most recent call last):
	22: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:144:in `block (2 levels) in create_resources'
	21: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `wait_for_ssh'
	20: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `loop'
	19: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:439:in `block in wait_for_ssh'
	18: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:452:in `ssh'
	17: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `start'
	16: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `new'
	15: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:90:in `initialize'
	14: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:223:in `wait'
	13: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:223:in `loop'
	12: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:225:in `block in wait'
	11: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:190:in `poll_message'
	10: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:190:in `loop'
	 9: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:210:in `block in poll_message'
	 8: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:184:in `accept_kexinit'
	 7: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:245:in `proceed!'
	 6: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:445:in `exchange_keys'
	 5: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/kex/abstract.rb:49:in `exchange_keys'
	 4: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/kex/abstract.rb:77:in `verify_server_key'
	 3: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/accept_new_or_local_tunnel.rb:17:in `verify'
	 2: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/accept_new.rb:18:in `verify'
	 1: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/always.rb:32:in `verify'
/var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/always.rb:50:in `process_cache_miss': fingerprint SHA256:67JzxGepeDTloRSotU9vlZ7OuucQBji3F5Qw7otu6xU does not match for "116.203.224.30" (Net::SSH::HostKeyMismatch)
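A Net::SSH::HostKeyMismatch like the one above usually means the server was deleted and recreated at the same IP, so the cached host key no longer matches. A minimal fix (the IP is taken from the error message) is to drop the stale entry and retry:

```shell
# Remove the cached host key for the recreated server, then re-run hetzner-k3s.
mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
ssh-keygen -R 116.203.224.30
```

Alternatively, `verify_host_key: false` in the cluster config skips host key verification entirely.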

missing documentation on how to specify a target project

I've read the documentation thoroughly, but I could not find where to specify the Hetzner Cloud project in which the cluster will be provisioned, until I noticed that the hetzner_token in the YAML config file is project-specific.

It would be good to have this documented in the README.md.

Looking forward to testing this for the first time! Thank you @vitobotta!

cluster_config.yaml.example - needs update

I copied the cluster_config.yaml.example to my personal k3s-hetzner.yaml

it took me some time to figure out that

  • ssh_key_path is now public_ssh_key_path
  • private_ssh_key_path is missing
  • ssh_allowed_network is missing

cluster_config.yaml.example is also out of sync with the README section "Creating a cluster".

Allow customizing the image used

It would be nice to be able to use a custom image, e.g. a snapshot.

I installed Longhorn on my cluster and wanted to use an RWX PVC, which requires NFS support to be installed on the system. I would like to install it once, create a snapshot, and use that as the base image for the nodes.
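In the meantime, a base snapshot can at least be produced manually with the hcloud CLI. A hedged sketch, where my-prepared-node is a placeholder for a server that already has the extra packages installed:

```shell
# Snapshot a server that was prepared by hand (e.g. with nfs-common installed).
# "my-prepared-node" is a hypothetical server name.
hcloud server create-image --type snapshot \
  --description "k3s base with NFS support" my-prepared-node
```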

Issues executing the main command

Hi there! It looks like it is not able to execute:

I tried both a clean install and a Docker attempt:

hetzner-k3s create-cluster --config-file=./cluster/test_filled.yaml 

________________________________________________________________________________
| ~/Documents/Code/ubloquity/terraform-k8s-hetzner-DigitalOcean-Federation/hetzner_03 @ jperez-mbp (jperez)
| => hetzner-k3s create-cluster --config-file=./cluster/test_filled.yaml

Firewall already exists, skipping.


Private network already exists, skipping.


Creating SSH key...
...SSH key created.

Traceback (most recent call last):
	10: from /usr/local/bin/hetzner-k3s:23:in `<main>'
	 9: from /usr/local/bin/hetzner-k3s:23:in `load'
	 8: from /Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/exe/hetzner-k3s:4:in `<top (required)>'
	 7: from /Library/Ruby/Gems/2.6.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
	 6: from /Library/Ruby/Gems/2.6.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
	 5: from /Library/Ruby/Gems/2.6.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
	 4: from /Library/Ruby/Gems/2.6.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
	 3: from /Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
	 2: from /Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/lib/hetzner/k3s/cluster.rb:39:in `create'
	 1: from /Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/lib/hetzner/k3s/cluster.rb:105:in `create_resources'
/Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/lib/hetzner/infra/ssh_key.rb:26:in `create': undefined method `[]' for nil:NilClass (NoMethodError)
________________________________________________________________________________
| ~/Documents/Code/ubloquity/terraform-k8s-hetzner-DigitalOcean-Federation/hetzner_03 @ jperez-mbp (jperez)

What I am doing first is an envsubst pass:

envsubst < ./cluster/test.yaml | tee ./cluster/test_filled.yaml 

where I am redacting:

hetzner_token: 0lQ5BEtHPChange_me9YVUOSIiOj8Kt68LNM2bV
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.21.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa_no_pass.pub"
private_ssh_key_path: "~/.ssh/id_rsa_no_pass"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: hel1
schedule_workloads_on_masters: false
masters:
  instance_type: cpx21
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cpx21
  instance_count: 3
- name: big
  instance_type: cpx31
  instance_count: 2

Any ideas? Cheers!

Feature Request: add the possibility to install custom packages on worker nodes

When I create a worker node using hetzner-k3s, there are some packages that I have to install manually. The most useful example in my case is nfs-common, which is required for multiple storage classes (e.g. a Longhorn PVC in RWX mode).

Could you make it possible to add an additional_packages list to the config.yaml? These packages could be installed via the user data during the Hetzner server setup.

e.g.:

---
hetzner_token: 1234
cluster_name: name
kubeconfig_path: "/cluster/kubeconfig"
k3s_version: v1.21.6+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: nbg1
schedule_workloads_on_masters: false
additional_packages:
  - "nfs-common"
masters:
  instance_type: cpx41
  instance_count: 1
worker_node_pools:
  - name: worker
    instance_type: cpx51
    instance_count: 3
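For reference, cloud-init supports package installation natively, so the proposed option could render to user data along these lines (a sketch of the idea, not something hetzner-k3s emits today):

```yaml
#cloud-config
# Refresh the package index, then install the requested packages.
package_update: true
packages:
  - nfs-common
```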

[Windows] Create cluster - No such file or directory

Hello,

the program stops during the cloud controller manager setup. Is there a way to fix it?

Deploying Hetzner Cloud Controller Manager...
Traceback (most recent call last):
        10: from C:/Ruby27-x64/bin/hetzner-k3s:23:in `<main>'
         9: from C:/Ruby27-x64/bin/hetzner-k3s:23:in `load'
         8: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/exe/hetzner-k3s:4:in `<top (required)>'
         7: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
         6: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
         5: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
         4: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
         3: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
         2: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:46:in `create'
         1: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:346:in `deploy_cloud_controller_manager'
C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:346:in `write': No such file or directory @ rb_sysopen - /tmp/cloud-controller-manager.yaml (Errno::ENOENT)

Thanks in advance!

Regards
ludgart

Create cluster with docker - fails because of an error

Hi!

I'm trying to create a cluster with the Docker command, but no matter what I try it doesn't work.
Do you have an example that doesn't use ${PWD} and ${HOME}, i.e. uses a full path?
Somehow those are not recognized on my Win10 laptop (I'm using Docker Desktop).

Folder structure:

C:\Users\John\Downloads\vitobotta\cluster.yaml
C:\Users\John\Downloads\vitobotta\.ssh

What i tried:

C:\Users\John\Downloads\vitobotta>docker run --rm -it -v C:\Users\John\Downloads\vitobotta:/cluster -v C:\Users\John\Downloads\vitobotta\.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.3 create-cluster --config-file cluster.yaml

It gives me this error message every time:

chmod: /root/.ssh/*: No such file or directory
chmod: /root/.ssh/*.pub: No such file or directory
Please specify a correct path for the config file.
Traceback (most recent call last):
        8: from /usr/local/bundle/bin/hetzner-k3s:23:in `<main>'
        7: from /usr/local/bundle/bin/hetzner-k3s:23:in `load'
        6: from /usr/local/bundle/gems/hetzner-k3s-0.4.3/exe/hetzner-k3s:4:in `<top (required)>'
        5: from /usr/local/bundle/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
        4: from /usr/local/bundle/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
        3: from /usr/local/bundle/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
        2: from /usr/local/bundle/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
        1: from /usr/local/bundle/gems/hetzner-k3s-0.4.3/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
/usr/local/bundle/gems/hetzner-k3s-0.4.3/lib/hetzner/k3s/cli.rb:355:in `find_hetzner_token': undefined method `dig' for nil:NilClass (NoMethodError)

When I execute this one:
docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.3 create-cluster --config-file cluster.yaml
it gives me:

docker: Error response from daemon: create ${PWD}: "${PWD}" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.

That's why I've tried to change the two volumes (PWD and HOME). I can change the PWD volume without getting an error, but the HOME volume keeps giving me errors.

Thanks!
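For anyone else hitting this: ${PWD} and ${HOME} are POSIX shell expansions, so cmd.exe passes them through to Docker literally. In cmd.exe the native equivalents are %cd% and %USERPROFILE% (an untested sketch for this layout):

```
:: cmd.exe: use native variables or full absolute paths for the volume mounts
docker run --rm -it -v "%cd%":/cluster -v "%USERPROFILE%\.ssh":/tmp/.ssh vitobotta/hetzner-k3s:v0.4.3 create-cluster --config-file /cluster/cluster.yaml
```

In PowerShell, ${PWD} and ${HOME} do expand, so the README command works there as written.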

MySQL for the k3s DB

Hi, per the k3s HA installation guide, an external database is needed, since the k3s database can get quite large.

Have you seen any issues with the embedded DB?

block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)

On the first run I got these exceptions. On the second run the script installed Kubernetes as expected. Maybe increase the timeout?

#<Thread:0x000000011009bd00 /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:162 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:455:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
#<Thread:0x000000011009bc10 /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:162 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'

No kubeconfig with docker

Hi, I tried to use your tool by installing it on Ubuntu, but I had problems with old Ruby versions, so I decided to try your Docker image. The problem I'm encountering is that the kubeconfig doesn't get generated for some reason. The output says the k3s service failed: Job for k3s.service failed because the control process exited with error code. I don't know if that is related. All the servers, a load balancer, and a firewall get created, but I don't know about k3s itself, since I cannot connect to the cluster without a kubeconfig. Also, I'm running this on Windows inside WSL2, if that makes a difference.

I would appreciate it if you could look into this.

cluster_config.yaml

---
hetzner_token: <TOKEN>
cluster_name: k3s
kubeconfig_path: "./kubeconfig"
k3s_version: v1.21.4+k3s1
ssh_key_path: "/tmp/.ssh/id_rsa.pub"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: nbg1
masters:
  instance_type: cx11
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cx11
  instance_count: 1

Command and output:

docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.0 create-cluster --config-file /cluster/cluster_config.yaml

Creating firewall...
...firewall created.


Creating private network...
...private network created.


Creating SSH key...
...SSH key created.


Creating API load_balancer...
...API load balancer created.





Creating server k3s-cx11-master2...
Creating server k3s-cx11-master1...
Creating server k3s-cx11-master3...
Creating server k3s-cx11-pool-small-worker1...
...server k3s-cx11-master3 created.

...server k3s-cx11-master1 created.

...server k3s-cx11-master2 created.

...server k3s-cx11-pool-small-worker1 created.


Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
...server k3s-cx11-master1 is now up.
...server k3s-cx11-master2 is now up.
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
...server k3s-cx11-master3 is now up.
...server k3s-cx11-pool-small-worker1 is now up.

Deploying k3s to first master (k3s-cx11-master1)...
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

...k3s has been deployed to first master.

Deploying k3s to master k3s-cx11-master2...

Deploying k3s to master k3s-cx11-master3...
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
[INFO]  systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details.

...k3s has been deployed to master k3s-cx11-master2.

...k3s has been deployed to master k3s-cx11-master3.

Deploying k3s to worker (k3s-cx11-pool-small-worker1)...
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

...k3s has been deployed to worker (k3s-cx11-pool-small-worker1).

Deploying Hetzner Cloud Controller Manager...
...Cloud Controller Manager deployed

Deploying Hetzner CSI Driver...
...CSI Driver deployed

Deploying k3s System Upgrade Controller...
...k3s System Upgrade Controller deployed

hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:279:in `deploy_kubernetes': undefined method `[]' for nil:NilClass (NoMethodError)

I seem to have an issue with this ruby gem:

hetzner-k3s create-cluster --config-file cluster_config.yaml

Placement group already exists, skipping.


Creating firewall...
...firewall created.


Creating private network...
...private network created.


SSH key already exists, skipping.




Creating server kubernetes-cx11-pool-small-worker1...
Creating server kubernetes-cx21-pool-big-worker1...
Creating server kubernetes-cpx31-master1...
...server kubernetes-cx21-pool-big-worker1 created.

...server kubernetes-cx11-pool-small-worker1 created.

...server kubernetes-cpx31-master1 created.


Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
...server kubernetes-cx21-pool-big-worker1 is now up.
...server kubernetes-cpx31-master1 is now up.
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
...server kubernetes-cx11-pool-small-worker1 is now up.

Traceback (most recent call last):
        9: from /root/.rbenv/versions/2.7.4/bin/hetzner-k3s:23:in `<main>'
        8: from /root/.rbenv/versions/2.7.4/bin/hetzner-k3s:23:in `load'
        7: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/exe/hetzner-k3s:4:in `<top (required)>'
        6: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
        5: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
        4: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
        3: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
        2: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
        1: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:42:in `create'
/root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:279:in `deploy_kubernetes': undefined method `[]' for nil:NilClass (NoMethodError)

cannot connect to server

When I run hetzner-k3s create-cluster --config-file hetzner-cluster.yaml, the program finishes in an error state: the connection to the server cannot be established. I'm using an SSH key to authenticate and the key is already loaded into my keyring. After the servers are created, I can connect to them with ssh [email protected].

hetzner-k3s create-cluster --config-file hetzner-cluster.yaml

Firewall already exists, skipping.


Private network already exists, skipping.


SSH key already exists, skipping.


API load balancer already exists, skipping.





Server kubeedge-cpx21-master2 already exists, skipping.

Server kubeedge-cpx21-master1 already exists, skipping.

Server kubeedge-cpx21-master3 already exists, skipping.

Server kubeedge-cpx21-pool-small-worker1 already exists, skipping.


Waiting for server kubeedge-cpx21-pool-small-worker1 to be up...
Waiting for server kubeedge-cpx21-master3 to be up...
Waiting for server kubeedge-cpx21-master1 to be up...
Waiting for server kubeedge-cpx21-master2 to be up...
#<Thread:0x00007f16c80435e0 /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:146 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	11: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:146:in `block (2 levels) in create_resources'
	10: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:448:in `wait_for_ssh'
	 9: from /usr/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	 8: from /usr/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	 7: from /usr/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	 6: from /usr/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	 5: from /usr/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
	 4: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:453:in `block in wait_for_ssh'
	 3: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:453:in `loop'
	 2: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:454:in `block (2 levels) in wait_for_ssh'
	 1: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:468:in `ssh'
/var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:268:in `start': Authentication failed for user [email protected] (Net::SSH::AuthenticationFailed)

Unable to create cluster

Hello,

I tried to create a cluster via Docker (docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.4 create-cluster --config-file /cluster/cluster.yaml)

My config looks like this

---
hetzner_token: xxx
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.21.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: nbg1
schedule_workloads_on_masters: false
masters:
  instance_type: cx11
  instance_count: 1
worker_node_pools:
- name: small
  instance_type: cx21
  instance_count: 1

But it gave me this error:


Placement group already exists, skipping.


Firewall already exists, skipping.


Private network already exists, skipping.


SSH key already exists, skipping.



Creating server test-cx11-master1...
Creating server test-cx21-pool-small-worker1...
...server test-cx11-master1 created.

...server test-cx21-pool-small-worker1 created.


#<Thread:0x000000400a44ec98 /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /usr/local/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /usr/local/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:455:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
#<Thread:0x000000400a44eba8 /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /usr/local/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /usr/local/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:455:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
Traceback (most recent call last):
	7: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /usr/local/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /usr/local/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:455:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)

Use of dedicated servers

Hi,
Any plans or guidance for setting up k3s on dedicated servers? I know you are running dynablogger on dedicated servers now 😀

question about annotation

Hi, me again; yeah, sorry, I've been quite busy with the cluster lately.
I applied the following custom configuration to the ingress-nginx chart:

controller:
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: kluster-ingress-nginx
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'

but for some reason my oauth2-proxy service is not working correctly. This page basically describes my issue. Now I see that I am missing the hostname annotation, which could be related (not sure, but I need to find out):
load-balancer.hetzner.cloud/hostname: <a valid fqdn>

Can you tell me what this should be?
The same as the load-balancer.hetzner.cloud/name of the nginx ingress load balancer (kluster-ingress-nginx)? Or the IP? But an FQDN is only a name, right?
Or should this be the domain name I'm using in cert-manager?

thanks again!

Rancher

Hi Vito

short question: do you run Rancher on your own cluster?
I followed your instructions (here #13) to install it, but I can't get it to work properly.

I get a 404 error.

Is there anything missing from the instructions? Which Helm values are you using?
I don't think it has anything to do with creating the cluster, but if you have some time for it, I'd be very grateful!

The status of the deployment seems to be OK:

PS D:\kluster> kubectl -n cattle-system rollout status deploy/rancher
deployment "rancher" successfully rolled out

the deployments:

PS D:\kluster> kubectl get deployments -n cattle-system

NAME              READY   UP-TO-DATE   AVAILABLE   AGE
rancher           3/3     3            3           10m
rancher-webhook   1/1     1            1           9m13s

the ingress:

PS D:\kluster> kubectl get ingress -n cattle-system

NAME                        CLASS    HOSTS                ADDRESS   PORTS     AGE
cm-acme-http-solver-bwjcq   <none>   rancher.mydomain.com             80        10m
rancher                     <none>   rancher.mydomain.com             80, 443   10m

The certificate and ingress seem to be OK:

PS D:\kluster> kubectl describe ingress -n cattle-system

Name:             cm-acme-http-solver-bwjcq
Namespace:        cattle-system
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                Path  Backends
  ----                ----  --------
  rancher.mydomain.com  
                      /.well-known/acme-challenge/YrJZvovWuc7sAMX0UDoDU5iCKRix3nsIOE6dBndGlvU   cm-acme-http-solver-gk64f:8089 (10.244.2.223:8089)
Annotations:          nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0
Events:               <none>


Name:             rancher
Namespace:        cattle-system
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  tls-rancher-ingress terminates rancher.mydomain.com
Rules:
  Host                Path  Backends
  ----                ----  --------
  rancher.mydomain.com  
                         rancher:80 (10.244.1.245:80,10.244.2.221:80,10.244.2.222:80)
Annotations:          cert-manager.io/issuer: rancher
                      cert-manager.io/issuer-kind: Issuer
                      meta.helm.sh/release-name: rancher
                      meta.helm.sh/release-namespace: cattle-system
                      nginx.ingress.kubernetes.io/proxy-connect-timeout: 30
                      nginx.ingress.kubernetes.io/proxy-read-timeout: 1800
                      nginx.ingress.kubernetes.io/proxy-send-timeout: 1800
Events:
  Type    Reason             Age   From          Message
  ----    ------             ----  ----          -------
  Normal  CreateCertificate  11m   cert-manager  Successfully created Certificate "tls-rancher-ingress"

Certificate:

PS D:\kluster> kubectl get certificate -n cattle-system

NAME                  READY   SECRET                AGE
tls-rancher-ingress   True    tls-rancher-ingress   12m

All seems to be OK, so I don't get it.

chmod on kubeconfig

Thanks for your project, it saved me a lot of time.

I got a WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: ~/kubeconfig, which I fixed with chmod go-r ~/.kube/config. Maybe you could set restrictive permissions when writing the file.
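
A minimal sketch of what the issue asks for (the helper name is hypothetical, not the tool's actual code): write the kubeconfig with owner-only permissions so kubectl's warning never appears.

```ruby
require "tmpdir"

# Hypothetical helper: write the kubeconfig so it is neither group- nor
# world-readable (mode 0600), which silences kubectl's warning.
def write_kubeconfig(path, content)
  File.write(path, content, perm: 0o600)
  File.chmod(0o600, path) # perm: only applies on creation; chmod also fixes pre-existing files
end

path = File.join(Dir.tmpdir, "kubeconfig-example")
write_kubeconfig(path, "apiVersion: v1\n")
```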

ArgumentError when deploying Cloud Controller Manager

For some reason, my cluster creations have started failing recently at the Cloud Controller Manager step.

...snip...
...k3s has been deployed to worker (test-cpx21-pool-small-worker1).

Deploying Hetzner Cloud Controller Manager...
/var/lib/gems/2.7.0/gems/dry-types-0.13.4/lib/dry/types/hash/schema.rb:151: warning: Capturing the given block using Proc.new is deprecated; use `&block` instead
(the same dry-types warning is repeated many more times)
Traceback (most recent call last):
        13: from /usr/local/bin/hetzner-k3s:23:in `<main>'
        12: from /usr/local/bin/hetzner-k3s:23:in `load'
        11: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.4.0/exe/hetzner-k3s:4:in `<top (required)>'
        10: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
         9: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
         8: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
         7: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
         6: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
         5: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:42:in `create'
         4: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:320:in `deploy_cloud_controller_manager'
         3: from /var/lib/gems/2.7.0/gems/k8s-ruby-0.10.5/lib/k8s/resource.rb:38:in `from_files'
         2: from /var/lib/gems/2.7.0/gems/yaml-safe_load_stream-0.1.1/lib/yaml/safe_load_stream.rb:56:in `safe_load_stream'
         1: from /var/lib/gems/2.7.0/gems/yaml-safe_load_stream-0.1.1/lib/yaml/safe_load_stream.rb:16:in `safe_load_stream'
/var/lib/gems/2.7.0/gems/psych-4.0.1/lib/psych.rb:452:in `parse_stream': wrong number of arguments (given 2, expected 1) (ArgumentError)

delete-cluster deletes all resources in hetzner-cloud project!

hetzner-k3s delete-cluster --config-file cluster_config.yaml

Expected: it should delete only the resources previously created with create-cluster.

Actual: it deletes all resources in the selected Hetzner Cloud project, backups included!

I have to say: I didn't like that! Admittedly, I had forgotten to protect another resource I had created just before, but I did not expect that deletion, especially since it didn't occur in the predecessor: https://github.com/vitobotta/hetzner-cloud-k3s
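
A sketch of the kind of guard this issue asks for (the helper name and naming convention are assumptions, not the tool's actual code): only treat a resource as deletable when its name carries the cluster-name prefix, instead of sweeping the whole project.

```ruby
# Hypothetical guard: resources created by create-cluster are named with the
# cluster name as a prefix, so deletion can be restricted to those names only.
def deletable?(resource_name, cluster_name)
  resource_name.start_with?("#{cluster_name}-")
end

# Usage sketch: servers.select { |s| deletable?(s["name"], "kluster") }
```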

Improvement: HTTP.follow.get retries

Thanks for the excellent repo!

A little improvement that could save a lot of time for those living in China or other countries with heavy internet restrictions would be to add a retry policy to the HTTP.follow.get calls (or to add an option for air-gapped installation).

GitHub calls mostly work, but raw.githubusercontent.com is typically either blocked outright or DNS-poisoned, so we have to run Docker with --network host plus export https_proxy... to get anything downloaded, and even then it sometimes takes 10+ reruns of the script.

Ruby installation issues

Hey Vito,

thanks for this amazing project! :)
I wanted to experiment a little, but on Ubuntu 20.04 with the standard-repo Ruby I get errors that some packages are too new. Before I start digging into this, since I have exactly zero Ruby knowledge, I'd like to hear at least something about your Ruby runtime.

I'd be happy to enhance the Readme afterwards :)

Kind Regards,
Nico

New Master not added to Cluster

Hi

I deleted my problematic master node and reran the create-cluster CLI, but the node was created as a standalone k3s node.

kubectl get nodes lists only the new master.

Undefined method `[]`

Hi. When I run the command hetzner-k3s create-cluster --config-file cluster_config.yaml I get a few of these errors:

Traceback (most recent call last):
        7: from /home/luna/.rvm/gems/ruby-2.6.0/gems/hetzner-k3s-0.3.5/lib/hetzner/k3s/cluster.rb:146:in `block (2 levels) in create_resources'
        6: from /home/luna/.rvm/gems/ruby-2.6.0/gems/hetzner-k3s-0.3.5/lib/hetzner/k3s/cluster.rb:448:in `wait_for_ssh'
        5: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:108:in `timeout'
        4: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:33:in `catch'
        3: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:33:in `catch'
        2: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:33:in `block in catch'
        1: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:93:in `block in timeout'
/home/luna/.rvm/gems/ruby-2.6.0/gems/hetzner-k3s-0.3.5/lib/hetzner/k3s/cluster.rb:449:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)

Not sure how to fix or what to even do. The system I'm running it from is Manjaro. I updated hetzner-k3s to the latest and still had the issue.

Edit: Fixed the codeblock.

is downgrade k3s possible?

Hi!
a quick question because the readme only describes the k3s upgrade process. Is downgrade also possible?
The reason I'm asking is that I have version v1.22.3+k3s1, but Rancher requires < 1.22.0-0.

PS C:\kluster> helm upgrade --install --namespace cattle-system --set hostname=rancher.mydomain.com --set ingress.tls.source=letsEncrypt --set [email protected] rancher rancher-stable/rancher

Release "rancher" does not exist. Installing it now.
helm : Error: chart requires kubeVersion: < 1.22.0-0 which is incompatible with Kubernetes v1.22.3+k3s1
At line:1 char:1
+ helm upgrade --install --namespace cattle-system --set hostname=ranch ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (Error: chart re...es v1.22.3+k3s1:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError

Load balancer annotations.

Edit: Made some parts easier to read.

This isn't an issue with this script or anything; it seems that I'm just being stupid 😓. I don't know where else to ask except here, since this script is the easiest starting point.

So, I give up lol. I've looked through so many docs and guides, but I'm still confused about how I'm supposed to get Rancher to work. Rancher and cert-manager boot up and the pods get ready, but the whole load balancer part confuses the hell out of me, especially where I would place the load balancer annotations and so on.

I thought I'd made progress when I found the load balancer README in the hcloud CCM repo, but even that confuses me, and I'm not sure how I'm supposed to set it all up starting from this script.
I do feel like I made some progress anyway, especially after using Lens, as that made it a bit easier to look at everything going on in the cluster.

I assume the hostname would be load.example.com, with DNS set up on Cloudflare as an A record pointing load.example.com to the load balancer's public IP. I might be wrong even about that.

So basically, I'd be forever grateful for some help and guidance with this, because either I'm missing something in all the docs and guides I've been reading, or I simply can't read lol.

Error on existing ssh key

Creating SSH key...
...SSH key created.

Traceback (most recent call last):
	10: from /usr/local/bin/hetzner-k3s:23:in `<main>'
	 9: from /usr/local/bin/hetzner-k3s:23:in `load'
	 8: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/exe/hetzner-k3s:4:in `<top (required)>'
	 7: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
	 6: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
	 5: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
	 4: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
	 3: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cli.rb:20:in `create_cluster'
	 2: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:34:in `create'
	 1: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:92:in `create_resources'
/var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/infra/ssh_key.rb:26:in `create': undefined method `[]' for nil:NilClass (NoMethodError)

works after i deleted my ssh key in the hetzner gui

wait until route to host is up

I know this from my Ansible scripts: just retry 3 times and everything runs.

Traceback (most recent call last):
	14: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:144:in `block (2 levels) in create_resources'
	13: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `wait_for_ssh'
	12: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `loop'
	11: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:439:in `block in wait_for_ssh'
	10: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:452:in `ssh'
	 9: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `start'
	 8: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `new'
	 7: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:73:in `initialize'
	 6: from /usr/lib/ruby/2.7.0/socket.rb:632:in `tcp'
	 5: from /usr/lib/ruby/2.7.0/socket.rb:227:in `foreach'
	 4: from /usr/lib/ruby/2.7.0/socket.rb:227:in `each'
	 3: from /usr/lib/ruby/2.7.0/socket.rb:642:in `block in tcp'
	 2: from /usr/lib/ruby/2.7.0/socket.rb:137:in `connect'
	 1: from /usr/lib/ruby/2.7.0/socket.rb:64:in `connect_internal'
/usr/lib/ruby/2.7.0/socket.rb:64:in `connect': No route to host - connect(2) for 116.203.69.124:22 (Errno::EHOSTUNREACH)
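
The retry the comment suggests could look like this sketch (host, port, intervals, and deadline are illustrative): keep attempting the TCP connection until the route comes up, bounded by an overall timeout, instead of failing on the first EHOSTUNREACH.

```ruby
require "socket"
require "timeout"

# Keep trying to open a TCP connection (e.g. to SSH on port 22) until it
# succeeds, retrying "route not up yet" errors; give up only when the
# overall deadline passes (Timeout::Error is deliberately not rescued).
def wait_for_port(host, port, deadline_seconds = 120)
  Timeout.timeout(deadline_seconds) do
    begin
      Socket.tcp(host, port, connect_timeout: 5) { |s| s.close }
      true
    rescue Errno::EHOSTUNREACH, Errno::ECONNREFUSED, Errno::ETIMEDOUT
      sleep 3
      retry
    end
  end
end
```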

little help needed

Sorry, I've been very busy recently, but I've got time now 👍
I still have the problem that I can't reach the hello-world app on my own domain; it works fine with port-forward. I'm not sure whether it's because of the way the cluster is made (firewall, load balancers etc.) or whether it's my own fault.
It would mean a lot to me if you could help me!

How I created the cluster:

---
hetzner_token: myapikeyisremoved
cluster_name: kluster
kubeconfig_path: "/cluster/kubeconfig"
k3s_version: v1.22.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: nbg1
schedule_workloads_on_masters: false
masters:
  instance_type: cpx21
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cpx21
  instance_count: 2

I executed the following commands to deploy nginx:

  1. helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  2. helm repo update
  3. helm upgrade --install --namespace ingress-nginx --create-namespace -f C:\kluster\ingress-nginx.yaml ingress-nginx ingress-nginx/ingress-nginx

How the ingress-nginx.yaml look like:

controller:
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: kluster-ingress-nginx
      load-balancer.hetzner.cloud/use-private-ip: "true"

So I'm trying to deploy this hello-world app; it is basically this file:
https://gist.githubusercontent.com/vitobotta/6e73f724c5b94355ec21b9eee6f626f1/raw/3036d4c4283a08ab82b99fffea8df3dded1d1f78/deployment.yaml
One thing changed though:

spec:
  rules:
  - host: mydomain.com

Everything is running well, when I Portforward the pod is gives me the Hello-world page.
But I cant get this to work on my own domain...

When I describe the hello-world service it shows me 10.43.229.204, which I think is an internal IP, right?

PS C:\kluster> kubectl describe service hello-world -n ingress-nginx
Name:              hello-world
Namespace:         ingress-nginx
Labels:            <none>
Annotations:       <none>
Selector:          app=hello-world
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.229.204
IPs:               10.43.229.204
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.4.7:80
Session Affinity:  None
Events:            <none>

When I describe the ingress used by hello-world it gives me Host mydomain.com, so this seems to be OK, right?

PS C:\kluster> kubectl describe ingress hello-world -n ingress-nginx
Name:             hello-world
Namespace:        ingress-nginx
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  mydomain.com  
              /   hello-world:80 (10.244.4.7:80)
Annotations:  <none>
Events:       <none>

When I run this one, it gives me the external IP of the load balancer (162.55.152.65).
This is the IP that I need to use in the DNS settings, right?

PS C:\kluster> kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP                                   PORT(S)                      AGE
hello-world                          ClusterIP      10.43.229.204   <none>                                        80/TCP                       15m
ingress-nginx-controller             LoadBalancer   10.43.131.110   10.0.0.8,162.55.152.65,2a01:4f8:1c1d:201::1   80:30584/TCP,443:31524/TCP   22m
ingress-nginx-controller-admission   ClusterIP      10.43.59.144    <none>                                        443/TCP                      22m

How my DNS settings look, basically the same IP as the public load balancer (ingress-nginx-controller):

A | mydomain.com | 162.55.152.65
A | www | 162.55.152.65

But when I go to mydomain.com it gives me an error; there's nothing to see...
Is the IP I'm using in the DNS settings correct?
Or do I need to add some annotations?
Thanks in advance

Originally posted by @jboesh in #30 (comment)

Allow to configure private ssh key

Right now the tool falls back to the SSH keys provided by the ssh agent.
This can become complicated when running on a CI server, as the agent does not necessarily contain the correct keys.
It would be great to have an option that sets the private SSH key to use (ideally both as a value in the config file and as an environment variable).
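
One way this could look as a sketch (the config key, env variable name, and helper are hypothetical; net-ssh itself does support the `:keys` and `:keys_only` options):

```ruby
# Build Net::SSH connection options from the cluster config, preferring an
# explicitly configured private key over whatever the ssh agent offers.
def ssh_options(config)
  opts = { verify_host_key: config["verify_host_key"] ? :always : :never }
  key = config["private_ssh_key_path"] || ENV["HETZNER_SSH_KEY_PATH"]
  if key
    opts[:keys] = [File.expand_path(key)]
    opts[:keys_only] = true # do not fall back to agent keys
  end
  opts
end

# Usage sketch: Net::SSH.start(host, "root", **ssh_options(config)) { |ssh| ... }
```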

username and password to access the server (for the volume). I need to edit a file in a volume

Hi!

I need to edit some values in a config file that is saved on a volume (attached to a specific Hetzner Cloud server).
What is the way to do this?
When I log in to Hetzner Cloud, go to the volumes, and click on the cloud server and 'Console', it asks for a username and password.
These are not the Hetzner Cloud credentials.

a reply from their support

As mentioned the login for a cloud server is always "root".

If you created the ssh key after creating the server it is not deposited on the remote system. This means you have to reset the password of your server.

If you have created the server and selected the ssh key during initial setup you can login without a password.

Does your script generate the SSH key before or after creating the server?
Anyway, when I try some combinations (username root, with and without a password), none of them works.

What am I doing wrong?

Can I reset the password without any problems? The server should just keep working in the cluster.

invalid byte sequence in UTF-8 (ArgumentError)

Hi @vitobotta,

I tried it out and all resources are created (load balancer, servers, network), but after that it crashes with the following error:

Traceback (most recent call last):
	28: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:147:in `block (2 levels) in create_resources'
	27: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:425:in `wait_for_ssh'
	26: from /usr/local/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	25: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	24: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	23: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	22: from /usr/local/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
	21: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:430:in `block in wait_for_ssh'
	20: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:430:in `loop'
	19: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:431:in `block (2 levels) in wait_for_ssh'
	18: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:445:in `ssh'
	17: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `start'
	16: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `new'
	15: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:88:in `initialize'
	14: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:88:in `new'
	13: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:153:in `initialize'
	12: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:277:in `prepare_preferred_algorithms!'
	11: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:98:in `host_keys'
	10: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:55:in `search_for'
	 9: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:61:in `search_in'
	 8: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:61:in `flat_map'
	 7: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:61:in `each'
	 6: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:61:in `block in search_in'
	 5: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:131:in `keys_for'
	 4: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:131:in `open'
	 3: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:132:in `block in keys_for'
	 2: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:132:in `each_line'
	 1: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:133:in `block (2 levels) in keys_for'
/usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:133:in `split': invalid byte sequence in UTF-8 (ArgumentError)

I have tried both the gem installation (rvm with Ruby 2.7.4) and the Docker image. I am on macOS M1.
I also checked the config file: YAML lint and UTF-8 encoding. Any idea what might cause this?
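
A quick diagnostic, assuming the bad bytes live in `~/.ssh/known_hosts` (the file net-ssh is parsing in the traceback above): list the line numbers whose bytes are not valid UTF-8, so the offending entries can be removed or regenerated.

```ruby
# Return the 1-based numbers of lines in the file whose bytes are not
# valid UTF-8 (the condition that makes net-ssh's `split` call raise).
def invalid_utf8_lines(path)
  bad = []
  File.foreach(path, encoding: "UTF-8").with_index(1) do |line, n|
    bad << n unless line.valid_encoding?
  end
  bad
end

# Usage sketch: p invalid_utf8_lines(File.expand_path("~/.ssh/known_hosts"))
```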

Little help needed for my https deployment on own domain

Hi,
first of all:
sorry for the stupid questions; they're maybe not quite related to what you made, but I could really use a little help right now.

I have created the cluster (not via the Docker way) and am trying to use my own (sub)domains in my deployments. Over HTTP I get this to work, but unfortunately not over HTTPS.
I also have to say that the documentation on the Hetzner Cloud Controller Manager isn't very good either... I can't find any good instructions on the internet.

You shared an example of your Service annotations:

  service:
    annotations:
      load-balancer.hetzner.cloud/hostname: <a valid fqdn>
      load-balancer.hetzner.cloud/http-redirect-https: 'false'
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: <lb name>
      load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'
      load-balancer.hetzner.cloud/use-private-ip: "true"

But shouldn't these two lines be added too?

load-balancer.hetzner.cloud/http-certificates
load-balancer.hetzner.cloud/protocol

But when I add the protocol line to the Service, the load balancer crashes in Hetzner Cloud.

Anyway, this is how my whoami.yaml deployment file looks:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: containous/whoami
  selector:
    matchLabels:
      app: whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  labels:
    app: whoami
  annotations:
    load-balancer.hetzner.cloud/hostname: 'whoami.mydomain.com'
    load-balancer.hetzner.cloud/http-redirect-https: 'false'
    load-balancer.hetzner.cloud/location: 'nbg1'
    load-balancer.hetzner.cloud/name: 'a0377f5249b9myidd34a497800858'
    load-balancer.hetzner.cloud/uses-proxyprotocol: 'false'
    load-balancer.hetzner.cloud/use-private-ip: 'true'
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 80
  selector:
    app: whoami

Then, at Cloudflare (which manages my DNS), I've created an A record pointing mydomain.com to the IP of the load balancer,
and a second A record for whoami.mydomain.com pointing to the same IP. Not sure if both are needed though.

When I apply the deployment, a load balancer is created and the whoami service becomes available at:

http://whoami.dockerjourney.ovh:443/
and http://loadbalancerip:443

but NOT over HTTPS haha... Is Let's Encrypt not included in the Hetzner Cloud Controller Manager?

Maybe something needs to be set manually in Hetzner Cloud? For example, on the load balancer under Networking, Public Network, you can fill in a reserved DNS name. But I'm not sure...
Or do I need to create a certificate in Hetzner Cloud and then use the service annotations?

Thanks in advance for your help, I would really appreciate it

Timeout when trying to create +9 servers at once

Hi!
I found that when I try to create 9 servers as per your example, only 8 servers get created and the script times out waiting for the 9th server to be up (the script also exits before completion).

#<Thread:/home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:146 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
        7: from /home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:146:in `block (2 levels) in create_resources'
        6: from /home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:448:in `wait_for_ssh'
        5: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
        4: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `catch'
        3: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `catch'
        2: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
        1: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:449:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)

(the same traceback is reported for a second thread)

I tried with 6 servers and it worked well. :)

Api-Platform integration issues

I have been procrastinating for a long time before creating this thread, and I know it's not a mistake on your part, but I don't know where else to ask.

I have created a cluster with your tool and everything works so far. I use API Platform and am now trying to integrate a load balancer. My expected result is that a load balancer is created with my worker nodes behind it.

In the values.yaml I have tried inserting the following in all places, always under annotations.

    load-balancer.hetzner.cloud/name: cluster-name-ingress-nginx
    load-balancer.hetzner.cloud/use-private-ip: "true"

among others the following

ingress:
  enabled: true
  annotations:
    load-balancer.hetzner.cloud/hostname: "random-hostname.com"
    load-balancer.hetzner.cloud/http-redirect-https: 'false'
    load-balancer.hetzner.cloud/location: nbg1
    load-balancer.hetzner.cloud/health-check-port: "30787"
    #load-balancer.hetzner.cloud/name: "backend-lb-prod"
    load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'
    load-balancer.hetzner.cloud/use-private-ip: 'true'
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
  hosts:
    - host: random-hostname.com
      paths:
        - path: "/"
          pathType: "Prefix"
          backend:
            service:
              name: "test"
              port:
                number: 80
  tls: []

Since that didn't work, I also tried writing the annotations directly into ingress.yaml as explained in the instructions.

After deployment, no load balancer is created.

Is there anyone here who can help me? I would be very grateful, I've been dealing with this for over a week now and can't get any further. I think other API Platform users would also appreciate it.

EDIT:
If i run
kubectl describe ingress main-api-platform

I got

Name:             main-api-platform
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  random-hostname.com  
                       /   main-api-platform:80 (10.244.2.20:80)
Annotations:           kubernetes.io/ingress.class: nginx
                       kubernetes.io/tls-acme: true
                       load-balancer.hetzner.cloud/health-check-port: 30787
                       load-balancer.hetzner.cloud/hostname: random-hostname.com
                       load-balancer.hetzner.cloud/http-redirect-https: false
                       load-balancer.hetzner.cloud/location: nbg1
                       load-balancer.hetzner.cloud/use-private-ip: true
                       load-balancer.hetzner.cloud/uses-proxyprotocol: true
                       meta.helm.sh/release-name: main
                       meta.helm.sh/release-namespace: default
Events:                <none>

I don't know if this info helps, but it seems the annotations are being picked up... just in the wrong place, I guess?
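
For what it's worth, the `load-balancer.hetzner.cloud/*` annotations are read by the Hetzner Cloud Controller Manager from Services of type `LoadBalancer`, not from Ingress resources, so placing them on the Ingress has no effect. With ingress-nginx installed via Helm, they would typically go on the controller's Service instead. A sketch (the exact value paths should be verified against the chart version in use):

```yaml
# values.yaml for the ingress-nginx Helm chart (a sketch, not a verified config).
controller:
  service:
    type: LoadBalancer
    annotations:
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
```

With this in place, the CCM should provision one Hetzner load balancer for the ingress controller, and individual Ingress resources then only need `kubernetes.io/ingress.class: nginx`.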

Hetzner Cloud Console Account got closed down!

I used this cluster creation tool; see its output below.

As a result, Hetzner closed our Cloud Console account because the tool malfunctioned and made too many requests to the Hetzner API.

They wrote (translated from German):

"It looks like you generated many API requests to create servers whose names were already assigned to other servers. Could you check this? We would then re-enable your API access.

We are already working on a fix on our side so that such API requests no longer have this effect."

Cheers,
rené


hetzner_token: "ea2*********************w"
cluster_name: "zooo-k3s"
kubeconfig_path: "../kubeconfig"
k3s_version: v1.21.3+k3s1
public_ssh_key_path: '/.ssh/id_rsa.pub'
private_ssh_key_path: '/.ssh/id_rsa'
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: fsn1
masters:
  instance_type: cpx11
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cpx11
  instance_count: 4
- name: big
  instance_type: cpx21
  instance_count: 2

OUTPUT:

Creating server zooo-k3s-cpx11-master1...
Creating server zooo-k3s-cpx11-pool-small-worker3...
Creating server zooo-k3s-cpx11-master3...
Creating server zooo-k3s-cpx11-master2...
Creating server zooo-k3s-cpx21-pool-big-worker1...
Creating server zooo-k3s-cpx11-pool-small-worker2...
Creating server zooo-k3s-cpx11-pool-small-worker1...
Creating server zooo-k3s-cpx11-pool-small-worker4...
Creating server zooo-k3s-cpx21-pool-big-worker2...
...server zooo-k3s-cpx11-pool-small-worker1 created.

Error creating server zooo-k3s-cpx11-master1. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:17 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1259", "ratelimit-reset"=>"1637088138", "x-correlation-id"=>"47ca9b2d-a61c-42fc-a263-52bc6b6be110", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-pool-small-worker3. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:17 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1258", "ratelimit-reset"=>"1637088139", "x-correlation-id"=>"b37320da-4f11-4309-96f0-086424565fc4", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx21-pool-big-worker2. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:20 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1258", "ratelimit-reset"=>"1637088142", "x-correlation-id"=>"4c983418-1542-4e4e-8793-535dee5b3040", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-pool-small-worker4. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:20 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1257", "ratelimit-reset"=>"1637088143", "x-correlation-id"=>"02156b5c-be08-4009-b0bb-7e0f918bb145", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-master2. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:22 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1256", "ratelimit-reset"=>"1637088146", "x-correlation-id"=>"eb8df639-e8b9-48e3-8e2e-3ac7ef5dcf07", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx21-pool-big-worker1. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:22 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1255", "ratelimit-reset"=>"1637088147", "x-correlation-id"=>"b450319d-f78f-4fd1-b5e1-d53dcfcb75cc", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-pool-small-worker2. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:26 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1253", "ratelimit-reset"=>"1637088153", "x-correlation-id"=>"7fa772e7-8629-49fd-86e6-2109707ba68e", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-master3. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:28 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1253", "ratelimit-reset"=>"1637088155", "x-correlation-id"=>"09da4401-3890-48f6-8627-ea28f5cebb90", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>

Stuck on Pending and CrashLoopBackOff

More issues, haha.
I followed the guide; I showed my YAML in the other issue I opened.
When I run the command to get all pods in all namespaces, this is the result:

NAMESPACE        NAME                                              READY   STATUS             RESTARTS   AGE
kube-system      coredns-7448499f4d-5zwgx                          0/1     Pending            0          18m
kube-system      hcloud-cloud-controller-manager-9546b6cc6-8wgrs   1/1     Running            0          17m
kube-system      hcloud-csi-controller-0                           0/5     Pending            0          17m
kube-system      hcloud-csi-node-bb5pw                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-bgqfx                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-ht5d7                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-nzbw4                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-vhlkg                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-znmzx                             2/3     CrashLoopBackOff   9          17m
system-upgrade   system-upgrade-controller-677965cc4d-cdrvp        0/1     Pending            0          17m

It stays like that, and the same happens if I install cert-manager: it just stays Pending. The output in the code block above is from a newly created cluster; the very first cluster I created after fixing the last issue has been like this since it was created.

When I run the command to describe the pods, this is the message most of them show (give or take details like the ready counts):

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  5m38s  default-scheduler  0/6 nodes are available: 6 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  5m36s  default-scheduler  0/6 nodes are available: 6 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.

Not really sure how to fix this.
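
For context, `node.cloudprovider.kubernetes.io/uninitialized: true` is a taint the kubelet applies when started with an external cloud provider; it is removed once a cloud controller manager initializes the node. Pods that must run before that point carry a matching toleration, which is why only the CCM itself is Running above. A minimal sketch of such a toleration (the pod name and image are illustrative):

```yaml
# Sketch: a pod tolerating the "uninitialized" taint so it can be scheduled
# before the cloud controller manager has initialized the nodes.
apiVersion: v1
kind: Pod
metadata:
  name: example-bootstrap-pod
spec:
  tolerations:
    - key: node.cloudprovider.kubernetes.io/uninitialized
      value: "true"
      effect: NoSchedule
  containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
```

If the taint never goes away for ordinary workloads, the usual suspect is the CCM failing to initialize the nodes (for example, a wrong API token or network configuration), so its logs are the first place to look.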

Another question: I've seen k3s used with an external database; is that something I still need to set up with this approach?
I'm still fairly new to all of this 😅

Issues creating the cluster

Hi again. I'm giving it another try since I'm following the project, but I seem to be stuck on something: I can see the resources created on the cloud side, yet the process doesn't complete.

jc@infra-0:~$ hetzner-k3s create-cluster --config-file cluster_config.yaml

Placement group already exists, skipping.
Firewall already exists, skipping.
Private network already exists, skipping.
SSH key already exists, skipping.
API load balancer already exists, skipping.


Creating server k3s-wireguard-cpx21-master1...
Creating server k3s-wireguard-cpx21-pool-small-worker2...
Creating server k3s-wireguard-cpx21-pool-small-worker3...
Creating server k3s-wireguard-cpx21-master3...
Creating server k3s-wireguard-cpx21-pool-small-worker1...
Creating server k3s-wireguard-cpx21-master2...
...server k3s-wireguard-cpx21-pool-small-worker3 created.

Error creating server k3s-wireguard-cpx21-master2. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Wed, 01 Dec 2021 18:28:58 GMT", "Content-Type"=>"application/json", "Content-Length"=>"202", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"3583", "ratelimit-reset"=>"1638383354", "x-correlation-id"=>"74604510-3a34-4d9b-ad6f-815e3ece5f4a", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server k3s-wireguard-cpx21-pool-small-worker1. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Wed, 01 Dec 2021 18:28:58 GMT", "Content-Type"=>"application/json", "Content-Length"=>"202", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"3584", "ratelimit-reset"=>"1638383353", "x-correlation-id"=>"0a07d11e-a88e-4e57-876f-ad7722144ce2", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
...server k3s-wireguard-cpx21-pool-small-worker2 created.

...server k3s-wireguard-cpx21-master3 created.

...server k3s-wireguard-cpx21-master1 created.


Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
#<Thread:0x00007ff16c059cf8 /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168:in `block (2 levels) in create_resources'
	6: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:460:in `wait_for_ssh'
	5: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:461:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
#<Thread:0x00007ff16c059c08 /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168:in `block (2 levels) in create_resources'
	6: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:460:in `wait_for_ssh'
	5: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:461:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
...server k3s-wireguard-cpx21-master1 is now up.
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
...server k3s-wireguard-cpx21-pool-small-worker3 is now up.
Traceback (most recent call last):
	7: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168:in `block (2 levels) in create_resources'
	6: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:460:in `wait_for_ssh'
	5: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:461:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
jc@infra-0:~$

My configuration looks like:

cat << 'EOF' > cluster_config.yaml
---
hetzner_token: "2qSTPIXRedactedY7l7hQ4L"
cluster_name: k3s-wireguard
kubeconfig_path: "./kubeconfig"
k3s_version: v1.21.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: hel1
schedule_workloads_on_masters: false
masters:
  instance_type: cpx21
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cpx21
  instance_count: 3
EOF

My env looks like:

jc@infra-0:~$ docker --version
Docker version 20.10.11, build dea9396
jc@infra-0:~$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
jc@infra-0:~$ ruby -v
ruby 2.7.5p203 (2021-11-24 revision f69aeb8314) [x86_64-linux]
jc@infra-0:~$

So as you can see I can connect over ssh like:

jc@infra-0:~$ ssh -i ~/.ssh/id_rsa [email protected]
Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-90-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed 01 Dec 2021 06:38:15 PM UTC

  System load:             0.04
  Usage of /:              2.1% of 74.82GB
  Memory usage:            5%
  Swap usage:              0%
  Processes:               150
  Users logged in:         0
  IPv4 address for enp7s0: 10.0.0.6
  IPv4 address for eth0:   95.217.21.109
  IPv6 address for eth0:   2a01:4f9:c010:3690::1


5 updates can be applied immediately.
4 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable


Last login: Wed Dec  1 18:37:34 2021 from 95.111.222.111
root@k3s-wireguard-cpx21-pool-small-worker2:~#

Thanks!

Having problems with NodePort

Hi there, loving this tool!

I'm having some problems getting NodePort services working. For some reason, even if I disable the automatically created firewalls, attempting to access a service using a NodePort configuration doesn't seem to work. Do you have any suggestions?
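
In case it helps narrow things down: a NodePort Service exposes the chosen port (default range 30000-32767) on every node's IP, so any firewall in front of the nodes must allow inbound traffic on that port. A minimal Service to test reachability with might look like this (names and port numbers are illustrative):

```yaml
# Sketch: minimal NodePort Service; once applied, the app should be reachable
# on <any-node-ip>:30080 if no firewall blocks that port.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```

If this works with the firewall disabled but not with it enabled, the fix is a firewall rule allowing the NodePort range; if it fails either way, the problem is more likely the Service selector or the pod itself.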

Further Access Restrictions using the firewall

Hey,
the tool creates a loadbalancer that forwards 6443 to the internal network.
As far as I understand, this makes the firewall rule allowing access to 6443 obsolete; it should be removed so that the nodes cannot be accessed directly.

Furthermore I think it would be helpful to have an option, to restrict access to SSH to specific IPs (e.g. company network) to further harden the cluster.
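
On the SSH point: the configuration files shown elsewhere in this thread already include an `ssh_allowed_networks` list, which controls the firewall's SSH rule. Narrowing it from the wide-open default to a company range might look like this (the CIDR is illustrative):

```yaml
ssh_allowed_networks:
  - 203.0.113.0/24   # example company network (illustrative CIDR)
```

Any source address outside the listed networks would then be blocked from reaching port 22 on the nodes.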

NetworkUnavailable - Could not create route

Hi,

thank you so much for your work!

After I create the k3s cluster, every node shows a message that the network is unavailable:
(combined from similar events): Could not create route ddb43c79-3777-40bb-9a53-7d75689f101f 10.244.0.0/24 for node agirancher-cpx21-master1 after 2.331535684s: hcloud/CreateRoute: hcops/AllServersCache.ByName: agirancher-cpx21-master1 hcops/AllServersCache.getCache: not found

Do you have any idea what the problem could be?

Thanks,
Basti

Pass API Token as Environment Variable

Hey, thank you for this awesome tool, saved me a lot of headache :)

I'd like to run the tool from CI to create multiple clusters. Is there any way to pass the API token as an environment variable (HCLOUD_TOKEN) instead of putting it in the config file?

Cheers
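
Until the tool reads HCLOUD_TOKEN natively, one workaround (a sketch; the file names and the `${HCLOUD_TOKEN}` placeholder are my own convention, not part of the tool) is to keep a config template without the secret and substitute the environment variable in the CI job:

```shell
#!/bin/sh
set -eu

# In CI this comes from the secret store; the default here is only for local testing.
HCLOUD_TOKEN="${HCLOUD_TOKEN:-dummy-token}"

# Template checked into the repo; the token itself stays out of version control.
cat > cluster_config.tpl.yaml <<'EOF'
---
hetzner_token: "${HCLOUD_TOKEN}"
cluster_name: ci-cluster
kubeconfig_path: "./kubeconfig"
EOF

# Substitute the token into a throwaway config file for this run.
sed "s|\${HCLOUD_TOKEN}|${HCLOUD_TOKEN}|" cluster_config.tpl.yaml > cluster_config.yaml
```

The CI job would then run `hetzner-k3s create-cluster --config-file cluster_config.yaml` and delete the generated file afterwards, so the token never lands in the repository.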
