v0.15.0-beta.0
Pre-release

- Deprecate cluster-manifests in favor of cluster-service-modules #199
- EKS: Allow setting Kubernetes version - thanks @Spazzy757 #188
- AKS: Allow setting Kubernetes version #200
Upgrade Notes
Like any Kubestack upgrade, change the version of your cluster module(s) and the image tag in the Dockerfiles. The deprecation of the cluster-manifests additionally requires manual changes. While we strive to avoid high-effort migrations like this, in this case the benefits drastically outweigh the downsides. Because Kubestack is still in beta, we also decided not to take on the long-term maintenance effort of providing a backwards compatible alternative for such a significant change.
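As a minimal sketch of this routine part of the upgrade, assuming the GitHub source/ref convention for cluster modules and the kubestack/framework image for the automation container, the change looks roughly like this; module name, source path and attributes are illustrative, adjust them to your own configuration:

```hcl
# Hypothetical AWS example; adapt the module name and source path
# to your own configuration and cloud provider.
module "eks_zero" {
  # bump the ref to the new release
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.15.0-beta.0"

  # ...cluster configuration unchanged...
}

# In the Dockerfile(s), bump the image tag to the matching version, e.g.:
# FROM kubestack/framework:v0.15.0-beta.0
```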
The previous approach of having Kustomize overlays defined under manifests/ and having the cluster modules implicitly provision the resulting Kubernetes resources had two major drawbacks:
- It required every team member to learn both Terraform and Kustomize, and the two tools' opposing paradigms caused significant mental overhead for every change.
- Because manifests were defined in YAML, it was also not easily possible to customize the Kubernetes resources based on values coming from Terraform.
With this release, Kubestack cluster modules no longer provision the YAML in manifests/ implicitly. Instead, all catalog services are now available as first-class Terraform modules. In addition, there is a new custom-manifests module that can be used to provision your bespoke YAML in the same way as the modules from the catalog.
This change simplifies the mental model: both clusters and cluster services are now simply Terraform modules. At the same time, because cluster-service-modules still use Kustomize under the hood, the low-effort maintenance of following new upstream releases is preserved. And because the overlay is now defined dynamically in HCL, you can fully customize all Kubernetes resources from Terraform values.
To learn more about how these modules allow full customization of the Kubernetes resources from Terraform, check the detailed documentation.
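As an illustration of what a catalog service looks like as a first-class Terraform module, here is a minimal sketch; the source address, version string and configuration keys are assumptions for illustration, the authoritative snippet is on each service's catalog page:

```hcl
# Illustrative only: source, version and configuration keys are assumptions,
# follow the service's catalog page for the exact usage.
module "example_nginx" {
  providers = {
    kustomization = kustomization.example
  }

  source  = "kbst.xyz/catalog/nginx/kustomization"
  version = "1.0.0-kbst.0"  # placeholder version

  configuration = {
    # per-environment customizations written in HCL instead of YAML,
    # so values from Terraform can be interpolated directly
    apps = {}
    ops  = {}
  }
}
```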
Overview
There are three cases to consider for this upgrade:
- For services from the catalog, migrate to using the dedicated module. Usage instructions are provided on each service's catalog page.
- For bespoke YAML, consider using the custom-manifests module or use the Kustomization provider directly. The module uses the explicit depends_on approach internally and simplifies the Terraform configuration in your root module. It is possible to use the custom-manifests module to apply an entire overlay from manifests/overlays as is. But it's recommended to call the module once for each base instead, to clearly separate independent Kubernetes resources from each other in the Terraform state (see the sketch after this list).
- For the ingress setup, migrating to the dedicated module requires using the nginx ingress cluster service module and setting it up to integrate with the default IP/DNS ingress setup. Refer to the AKS, EKS and GKE specific migration instructions for the required code changes.
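For the bespoke YAML case, a minimal sketch of calling the custom-manifests module once per base could look like the following; the source, version and attribute names are assumptions, check the module's documentation for the exact interface:

```hcl
# Sketch only: interface details are assumptions, see the custom-manifests
# module documentation for the authoritative usage.
module "custom_example_app" {
  providers = {
    kustomization = kustomization.example
  }

  source  = "kbst.xyz/catalog/custom-manifests/kustomization"
  version = "0.1.0"  # placeholder version

  configuration = {
    apps = {
      # calling the module once per base keeps independent Kubernetes
      # resources separated in the Terraform state
      resources = ["${path.root}/manifests/bases/example-app"]
    }
    ops = {}
  }
}
```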
Migration strategies
In all three cases, Terraform will generate destroy-and-recreate plans for all affected Kubernetes resources, because even though the Kubernetes resources themselves don't change, their location in the Terraform state does. You have two options here. You can either plan a maintenance window and run the destroy-and-recreate apply. Or you can manually terraform state mv resources in the state until Terraform no longer generates a destroy-and-recreate plan, and only then run the apply.
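As a rough sketch of the manual approach, the state addresses below are placeholders; use terraform state list and repeated plans to find the real source and target addresses in your state:

```sh
# List the Kubernetes resources currently tracked in the state
terraform state list

# Move each resource from its old address under the cluster module to its
# new address under the cluster-service module (addresses are placeholders)
terraform state mv \
  'module.<old_module_path>.kustomization_resource.<old_name>["<id>"]' \
  'module.<new_service_module>.kustomization_resource.<new_name>["<id>"]'

# Repeat plan and mv until no destroy and recreate plan is generated,
# then run the apply
terraform plan
```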
Ingress specifics
For the ingress migration, you will in any case have a small service disruption, because the Kubernetes service of type LoadBalancer needs to be replaced to fix the issue of two cloud load balancers being created. This destroys the old load balancer, creates a new one and switches DNS over. During testing, this caused 5-10 minutes of downtime. For critical environments and least disruption, we suggest lowering the DNS TTL ahead of time and letting the decrease propagate before starting the migration during a maintenance window.

In AWS' case, the new ELB gets a new CNAME; since Kubestack uses Route 53 alias records, the disruption is kept to a minimum. For both Azure and Google Cloud, Kubestack uses reserved IP addresses, which are reused by the new cloud load balancers, leaving DNS unchanged. But for all three providers, it will take some time until the new load balancers are back in service.