This repository contains configuration for deploying and managing one or more vClusters on a host Kubernetes cluster using a GitOps workflow via Flux CD. The management of the host cluster is beyond the scope of this repository.
By default, Kubernetes does not provide strong multi-tenancy guarantees. In particular, custom resource definitions (CRDs), which are increasingly widely adopted, are cluster-scoped, meaning that in practice tenants cannot be given access to CRDs without risking interference with each other.
vCluster allows multiple "virtual" Kubernetes clusters to be deployed on a single host Kubernetes cluster. Broadly speaking, these clusters take the form of a dedicated Kubernetes control plane, with its own storage, and a "syncer" that is responsible for synchronising low-level resources from the virtual cluster to the underlying cluster, e.g. pods and services.
Each vCluster runs inside a namespace on the host cluster, and pods created by the syncer are created in that namespace on the host cluster regardless of their namespace in the vCluster. This means that vClusters can be isolated from each other using Kubernetes features such as pod security standards, resource quotas, limit ranges and network policies. vClusters can even be configured to target different nodes in the host cluster using node labels.
For more information, see the vCluster documentation.
To provision vClusters using this repository, you must first have access to a host cluster.
The host cluster must have a CNI that supports network policies (e.g. Cilium or Calico) and a storage class that can be used to provision the storage volume for each vCluster (ideally backed by SSD).
This repository uses ingresses to expose the API servers for the vClusters (although other options are available). This requires that the host cluster is running an ingress controller with SSL passthrough enabled.
This repository assumes that ingress-nginx is being used to provide ingress, and uses the corresponding annotations to configure SSL passthrough. However, it could easily be adapted to use other ingress controllers that allow SSL passthrough.
Warning
SSL passthrough is not enabled by default when deploying ingress-nginx. To enable it when using the Helm chart, use the following values:
controller:
extraArgs:
enable-ssl-passthrough: "true"
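For example, when installing ingress-nginx with Helm, the values above could be applied as follows (release and namespace names here are the common defaults, adjust as needed):
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --values values.yaml   # the values file containing the snippet above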
The host cluster must have the Flux CD controllers installed.
First, fork or copy this repository into a new repository.
To define a new cluster, just copy the example cluster and modify the config to suit your use case.
namespace.yaml contains the definition for the namespace that the vCluster will use. Labels and annotations can be applied to the namespace, e.g. a pod security label to determine which pod security standard is enforced. The namespace in kustomization.yaml must be updated to match the name of the namespace in namespace.yaml.
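As a minimal sketch, a namespace.yaml that enforces a pod security standard could look like the following (the namespace name and enforcement level are illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: my-vcluster
  labels:
    # Pod Security Admission level enforced for all pods the syncer creates in this namespace
    pod-security.kubernetes.io/enforce: baseline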
overrides.yaml contains overrides for the vCluster configuration. The full range of possible overrides can be found in the vCluster docs.
In particular, the hostname for the ingress that is used for the API server is required. This hostname must resolve to the IP address for the ingress controller's load balancer.
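As a rough sketch, the ingress-related part of overrides.yaml could look like the following, assuming the pre-v0.20 Helm chart values layout (key names differ in newer vCluster releases, so check the docs for your version); the hostname is illustrative:
ingress:
  enabled: true
  ingressClassName: nginx
  host: my-cluster.k8s.example.org          # must resolve to the ingress controller's load balancer
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
syncer:
  extraArgs:
    - --tls-san=my-cluster.k8s.example.org  # include the hostname in the API server certificate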
The example cluster also includes configuration for authenticating using OpenID Connect (OIDC). In order to configure this, you must first create an OIDC client with the device flow enabled. This process will differ for each identity provider and is beyond the scope of this documentation.
Note
An OIDC client per vCluster is recommended.
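For reference, OIDC authentication on a Kubernetes API server is driven by flags such as the ones below; exactly where they are set in the overrides depends on the vCluster distro and chart version, and the claim names shown here are assumptions:
--oidc-issuer-url=https://myidp.example.org   # issuer URL of your identity provider
--oidc-client-id=my-vcluster                  # the OIDC client created for this vCluster
--oidc-username-claim=preferred_username      # claim used for usernames (assumption)
--oidc-groups-claim=groups                    # claim used for group membership (assumption)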
The example cluster also includes configuration for binding the cluster-admin role within the vCluster to a group from the OIDC claims. Whether this is suitable for production depends entirely on your use case.
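As an illustration, such a binding inside the vCluster could look like the following (the group name is a placeholder for a group that appears in your OIDC claims):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  # Group name exactly as it appears in the OIDC groups claim (placeholder)
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: my-vcluster-admins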
To add the cluster to the Flux configuration, edit the root kustomization.yaml to point to the cluster directory:
resources:
- ./clusters/cluster1
- ./clusters/cluster2
This will need to be done for each new cluster that is added.
Configuring Flux to manage the vClusters defined in the repository is a one-time operation:
flux create source git vclusters --url=<giturl> --branch=main
flux create kustomization vclusters --source=GitRepository/vclusters --prune=true
This creates a Kustomization that will deploy the root kustomization.yaml from your repository, hence deploying all the vClusters referenced in that file.
Assuming you have the ingress working, you can do:
vcluster list
kubectl get ingress -n <namespace>
vcluster connect vcluster --server <ingress> -n <namespace>
<use kubectl on vcluster here>
vcluster disconnect
This section assumes that the vCluster has been configured to use OIDC for authentication. Other mechanisms for accessing vClusters are described in the vCluster documentation.
To use OIDC to access a vCluster, the client must have the oidc-login plugin for kubectl installed.
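One common way to install it, assuming you have the krew plugin manager, is:
kubectl krew install oidc-login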
First, you must obtain the base64-encoded CA certificate for the vCluster's API server, using a kubeconfig file that can access the host cluster:
kubectl -n my-vcluster get secret vcluster-certs -o go-template='{{index .data "ca.crt"}}'
Then create a KUBECONFIG file similar to the following to access the vCluster, replacing the server with the ingress hostname for the API server, and the OIDC issuer and client ID with the values used when configuring the vCluster:
apiVersion: v1
clusters:
- cluster:
server: https://my-cluster.k8s.example.org:443
certificate-authority-data: <BASE64-ENCODED CERT DATA>
name: vcluster
contexts:
- context:
cluster: vcluster
user: oidc
name: oidc@vcluster
current-context: oidc@vcluster
kind: Config
preferences: {}
users:
- name: oidc
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
command: kubectl
args:
- oidc-login
- get-token
- --grant-type=device-code
- --oidc-issuer-url=https://myidp.example.org
- --oidc-client-id=my-vcluster
This configuration will perform a device flow authentication with the issuer to get an OIDC token that can be used to interact with Kubernetes.
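For example, pointing kubectl at the file (saved here as vcluster-kubeconfig.yaml, a name chosen for illustration) and running any command will trigger the device-code prompt on first use:
export KUBECONFIG=$PWD/vcluster-kubeconfig.yaml
kubectl get namespaces   # prints the device-code login instructions on first use; the plugin caches the token afterwards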