
exportKubeConfig.server not working after 0.21.4 #2390

Open
jawsper opened this issue Jan 10, 2025 · 7 comments

jawsper commented Jan 10, 2025

What happened?

I created a vcluster using the helm chart version 0.22.4, where I set this in the values:

exportKubeConfig:
  server: https://env-${env}.env-${env}.svc.cluster.local

(${env} gets replaced by the postBuild of the flux kustomization)

But the generated config still has:

config:
  clusters:
  - cluster:
      server: https://localhost:8443

What did you expect to happen?

I expected the config to contain the configured server:

config:
  clusters:
  - cluster:
      server: https://env-${env}.env-${env}.svc.cluster.local

How can we reproduce it (as minimally and precisely as possible)?

Use helm chart 0.22.4 or later, and set the config above.

Anything else we need to know?

I downgraded to 0.21.4, and it worked in that version. Additionally, changing other elements of the exported kubeconfig does work; specifically, context worked for me.

Host cluster Kubernetes version

$ kubectl version
Client Version: v1.30.8-dispatcher
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.6-gke.1125000

vcluster version

$ vcluster --version
vcluster version 0.21.2

Though I never used the CLI, since I did everything with the helm chart.

VCluster Config

controlPlane:
  distro:
    k3s:
      enabled: true
  proxy:
    extraSANs:
    - env-${env}.env-${env}.svc.cluster.local
  statefulSet:
    resources:
      limits:
        memory: 4Gi
    scheduling:
      podManagementPolicy: OrderedReady
    persistence:
      volumeClaim:
        size: 10Gi
sync:
  fromHost:
    ingressClasses:
      enabled: true
  toHost:
    ingresses:
      enabled: true
    serviceAccounts:
      enabled: true
exportKubeConfig:
  context: env-${env}
  server: https://env-${env}.env-${env}.svc.cluster.local

Where ${env} gets replaced with the specific environment I created.

cbron (Contributor) commented Jan 10, 2025

To confirm, you don't have a secret on your exportKubeConfig setting? If you did, it would end up changing the other secret.
What is the name of the secret you are getting the config from? And can you also check the logs for errors? It's possible the ${env} part is throwing it off.

jawsper (Author) commented Jan 12, 2025

The name of the secret is vc-{name-of-env}, which contains the kubeconfig in the config key.

I've made a minimal example where I've created a vcluster named env-demo with 0.21.4 and env-broken with 0.22.1; they have identical configuration otherwise:

apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: loft-sh
  namespace: flux-system
spec:
  interval: 30m
  url: https://charts.loft.sh
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: env-demo
  namespace: flux-system
spec:
  interval: 1m
  releaseName: env-demo
  targetNamespace: env-demo
  chart:
    spec:
      chart: vcluster
      version: "0.21.4"
      sourceRef:
        kind: HelmRepository
        name: loft-sh
  install:
    createNamespace: true
  values:
    exportKubeConfig:
      context: env-demo
      server: https://env-demo.env-demo.svc.cluster.local
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: env-broken
  namespace: flux-system
spec:
  interval: 1m
  releaseName: env-broken
  targetNamespace: env-broken
  chart:
    spec:
      chart: vcluster
      version: "0.22.1"
      sourceRef:
        kind: HelmRepository
        name: loft-sh
  install:
    createNamespace: true
  values:
    exportKubeConfig:
      context: env-broken
      server: https://env-broken.env-broken.svc.cluster.local

The way I've tested the minimal example is with minikube, with Flux installed via flux install (no other arguments), and then applying the above YAML with kubectl apply -f example.yaml

In the secret vc-env-demo (0.21.4), the value of config.clusters[0].cluster.server is https://env-demo.env-demo.svc.cluster.local

In the secret vc-env-broken (0.22.1), the value of the same key is https://localhost:8443
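For reference, this is how the secret values above can be inspected. The kubectl line is a sketch using the secret and namespace names from this example (adjust them to your release); the runnable part below just demonstrates the base64 decode step on an inline sample, since the kubeconfig is stored base64-encoded under the secret's config key:

```shell
# On a live cluster (names taken from the example above):
#   kubectl get secret vc-env-demo -n env-demo -o jsonpath='{.data.config}' \
#     | base64 -d | grep 'server:'
#
# The same decode step, demonstrated on an inline sample round-trip:
printf 'server: https://env-demo.env-demo.svc.cluster.local\n' \
  | base64 | base64 -d | grep 'server:'
```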

If you want, I can also try running helm directly instead of Flux; I'm not personally convinced it would have a different outcome, but I'd be willing to try.

jawsper (Author) commented Jan 12, 2025

I just tested by running the helm CLI directly on a clean cluster, and the results are the same: server gets set on 0.21.4, but not on 0.22.1.

values.yaml:

exportKubeConfig:
  server: https://env-demo.env-demo.svc.cluster.local

Commands:

helm upgrade vcluster-good vcluster --install --repo https://charts.loft.sh --namespace vcluster-good --create-namespace --values values.yaml --version 0.21.4
helm upgrade vcluster-broken vcluster --install --repo https://charts.loft.sh --namespace vcluster-broken --create-namespace --values values.yaml --version 0.22.1 

kale-amruta (Contributor) commented:

@jawsper This looks like it's by design; it was added in PR #2273.

The default secret that gets created in the vcluster namespace is not updated; you need to set a different secret name in the exportKubeConfig field, and the kubeconfig will then be exported to that secret with the updated server name.

For example:

exportKubeConfig:
  server: https://env-demo.env-demo.svc.cluster.local
  secret:
    name: vc-env-demo-new
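With that workaround applied, the new secret can be checked the same way. A sketch only; the secret name vc-env-demo-new and namespace env-demo follow the example above and will differ in your setup:

```shell
# Decode the exported kubeconfig from the explicitly named secret and
# confirm the server field matches the configured value:
kubectl get secret vc-env-demo-new -n env-demo \
  -o jsonpath='{.data.config}' | base64 -d | grep 'server:'
```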

jawsper (Author) commented Jan 13, 2025

Oh wow, that is exactly what I needed! This way it also prevents vcluster connect from being broken due to it not being able to find the server.

Sorry for all the noise here!

Would it be possible to state explicitly in the documentation that setting a secret name is required if you want to adjust the server?

kale-amruta (Contributor) commented Jan 13, 2025

Yeah, we can add it to the documentation.

cbron (Contributor) commented Jan 15, 2025

Agree that the above is confusing. We have written up a wider solution that will allow for more flexibility and less confusion in the future. When that gets published, the docs will naturally be updated to reflect that system. In the short term, if you'd like to make a PR to the docs to reflect the current situation, that would be great; you can assign me as a reviewer.
