Releases · vitobotta/hetzner-k3s
v2.2.2
Improvements
- We now use the kube context of the first master to install the software, then only switch to the load balancer context at the very end, if it’s available. This approach helps because the load balancer might take some time to become healthy, which could otherwise slow down the installation process.
- Added an exponential backoff mechanism for cases where instance creation fails, such as when the selected instance types aren’t available in the chosen locations. This should help handle temporary issues more effectively (see the sketch after this list).
- Added a new `--force` option to the `delete` command. If you set it to `true`, the cluster will be deleted without any prompts. This is really handy for automated operations (see the example after this list).
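To illustrate the retry pattern, here is a minimal shell sketch of exponential backoff. This is for illustration only: `create_instance` is a hypothetical stand-in for the instance creation call, and the actual logic lives inside the tool (which is written in Crystal), not in a script.

```bash
# Hypothetical sketch of the retry behavior, not the tool's actual code.
max_attempts=6
attempt=1
until create_instance; do              # create_instance: stand-in for the API call
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "Giving up after ${max_attempts} attempts" >&2
    exit 1
  fi
  delay=$((2 ** attempt))              # 2s, 4s, 8s, ... between retries
  echo "Instance creation failed, retrying in ${delay}s" >&2
  sleep "$delay"
  attempt=$((attempt + 1))
done
```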
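And a sketch of a fully non-interactive deletion, assuming a hypothetical config file named `cluster_config.yaml` and the flag syntax described above:

```bash
# Delete the cluster without confirmation prompts, e.g. in CI pipelines.
hetzner-k3s delete --config cluster_config.yaml --force true
```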
Fixes
- Fixed an issue where the `create` command would time out before setting up the cluster autoscaler. This happened when there were no static worker node pools configured.
- Fixed an issue that surfaced when using an existing private network with a subnet size other than /16 (by @ValentinVoigt).
Upgrading from v2.1.0
See instructions for v2.2.0.
v2.2.1
Improvements
- We now use the kube context of the first master to install the software, then only switch to the load balancer context at the very end, if it’s available. This approach helps because the load balancer might take some time to become healthy, which could otherwise slow down the installation process.
- Added an exponential backoff mechanism for cases where instance creation fails, such as when the selected instance types aren’t available in the chosen locations. This should help handle temporary issues more effectively.
- Added a new `--force` option to the `delete` command. If you set it to `true`, the cluster will be deleted without any prompts. This is really handy for automated operations.
Fixes
- Fixed an issue where the `create` command would time out before setting up the cluster autoscaler. This happened when there were no static worker node pools configured.
Upgrading from v2.1.0
See instructions for v2.2.0.
v2.2.0
New
- Added support for the Singapore location.
- We’ve reintroduced the option to create a load balancer for the Kubernetes API, but this time it’s optional and turned off by default. If you want to use it, you can enable it by setting `create_load_balancer_for_the_kubernetes_api: true` (see the excerpt after this list). Just a heads-up: the load balancer was removed a few versions back because Hetzner doesn’t yet support load balancers in their firewalls, which means you can’t restrict access to the Kubernetes API when using one. However, since some users asked for it, we’ve brought it back for flexibility.
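A minimal sketch of the relevant config line, assuming a hypothetical `cluster_config.yaml` with all other required settings already in place:

```yaml
# cluster_config.yaml (excerpt) -- other required settings omitted
create_load_balancer_for_the_kubernetes_api: true
```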
Fixes
- Fixed a problem that caused extra placement groups to be created.
- Resolved an issue where pagination was missing when fetching SSH keys in projects with more than 25 keys.
- Fixed the assignment of labels and taints to nodes.
Improvements
- We took out the library we were using for SSH sessions because it occasionally caused issues with certain keys, and those problems were tricky to figure out and fix. Now we’re using the standard `ssh` binary that comes with the operating system to run commands on remote nodes. This change should help prevent the strange compatibility problems that popped up with some keys or environments.
- The cached list of available k3s versions now refreshes automatically if the cache is older than 7 days.
- The system now waits for at least one worker node to be ready before installing the Cluster Autoscaler. This prevents premature autoscaling when creating a new cluster. Previously, the Cluster Autoscaler was installed before worker nodes were ready, which could trigger autoscaling as soon as pending pods were detected.
- For consistency, autoscaled node pools now include the cluster name as a prefix in node names, similar to static node pools.
- Added a confirmation prompt before deleting a cluster to avoid accidental deletion when using the wrong config file.
- Clusters are now protected from deletion by default as an additional measure to prevent accidentally deleting the wrong one. If you're working with test or temporary clusters and need to delete them, you can disable this protection by setting `protect_against_deletion: false` in the configuration file (see the config excerpt after this list).
- Added a confirmation prompt before upgrading a cluster to prevent accidentally upgrading the wrong cluster.
- Improved exception handling during the software installation phase. Previously, a failure while installing a software component could stop the setup of worker nodes.
- Disabled the `local-path` storage class by default to avoid conflicts where k3s automatically sets it as the default storage class.
- The tool no longer opens firewall ports for the embedded registry mirror if a private network is available.
- Made the image tag for the Cluster Autoscaler customizable using the setting `manifests.cluster_autoscaler_container_image_tag`.
- Autoscaled nodes are now considered when determining upgrade concurrency.
- Added error and debugging information when SSH sessions to nodes fail.
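Several of the settings above live in the cluster configuration file. A hypothetical excerpt, with the nesting inferred from the dotted key names in these notes and example values only:

```yaml
# cluster_config.yaml (excerpt) -- values shown are examples only
protect_against_deletion: false    # allow `delete` to proceed for test clusters
local_path_storage_class:
  enabled: true                    # opt back in to the local-path storage class
manifests:
  cluster_autoscaler_container_image_tag: v1.30.0   # hypothetical tag, pick your own
```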
Miscellaneous
- Upgraded the System Upgrade Controller to the latest version.
- Upgraded the Hetzner CSI Driver to the latest version.
- Upgraded the Hetzner Cloud Controller Manager to the latest version.
- Upgraded the Cluster Autoscaler to the latest version.
- Upgraded Cilium to the latest version.
Upgrading from v2.1.0
- If you have active autoscaled node pools (pools with one or more nodes currently in the cluster), you need to set the property `include_cluster_name_as_prefix` to `false` for those pools, due to the naming convention change mentioned earlier.
- If you are using the `local-path` storage class, you need to set `local_path_storage_class.enabled` to `true`.
- If you'd rather use a load balancer for the Kubernetes API instead of constantly switching between contexts, you can enable it by setting `create_load_balancer_for_the_kubernetes_api: true`. After that, just run the `create` command to set up the load balancer (see the example after this list).
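For example, with the setting enabled in a hypothetical `cluster_config.yaml`:

```bash
# Re-running create applies the new setting to the existing cluster;
# it is the same command used for the initial setup.
hetzner-k3s create --config cluster_config.yaml
```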
v2.1.0
Improvements
- This update lets different types of instances coexist within the same node pool. This will make it easier for older clusters to transition from the 1.1.5 naming system, which included instance type in the name, to the newer 2.x naming scheme that doesn’t include this detail.
Upgrading
Important: See notes for v2.0.0 if you are upgrading from v1.1.5.
v2.0.9
v2.0.8
Fixed
- Fixed an issue preventing correct detection of the private network interface for autoscaled nodes.
Upgrading
Important: See notes for v2.0.0 if you are upgrading from v1.1.5.
v2.0.7
Fixed
- Temporarily switched to a custom autoscaler image by Hetzner. The official image still relies on an instance type that has been deprecated.
Upgrading
Important: See notes for v2.0.0 if you are upgrading from v1.1.5.
v2.0.6
Fixed
- Fixed an issue with JSON requests to Hetzner introduced by the upgrade of the Crest library in 2.0.5.
Upgrading
Important: See notes for v2.0.0 if you are upgrading from v1.1.5.
v2.0.5
Improvements
- Configured memory requests for Cilium.
Fixes
- Upgraded some shards to fix an OpenSSL issue on Fedora (by @n7st).
Upgrading
Important: See notes for v2.0.0 if you are upgrading from v1.1.5.
v2.0.4
Improvements
- Changed the way we detect the current IP address to use IPinfo, since it works in locations like China, where the previous method based on an Akamai service wasn't working.
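The lookup boils down to a simple HTTP request; for illustration only (the tool performs this internally, not via curl):

```bash
# IPinfo returns the caller's public IP address as plain text.
curl -s https://ipinfo.io/ip
```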
Upgrading
Important: See notes for v2.0.0 if you are upgrading from v1.1.5.