
Helm uninstall does not remove pods in microk8s #3665

Closed
l5 opened this issue Jan 13, 2023 · 6 comments
l5 commented Jan 13, 2023

I have tested this with MicroK8s and minikube; the issue occurs only in MicroK8s. I install a Helm release in the cluster; helm uninstall succeeds, but some resources created by helm install are never removed.

Here is a minimal configuration that reproduces the issue:

Versions (this is on Ubuntu 22.04):

$ microk8s version
MicroK8s v1.26.0 revision 4390

$ helm version
version.BuildInfo{Version:"v3.10.3", GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704", GitTreeState:"clean", GoVersion:"go1.18.9"}

Creating namespace:

$ kubectl create namespace mydemo
namespace/mydemo created

Installing sample helm chart:

$ helm install happy-panda bitnami/wordpress -n mydemo
NAME: happy-panda
LAST DEPLOYED: Fri Jan 13 08:50:58 2023
NAMESPACE: mydemo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: wordpress
CHART VERSION: 15.2.22
APP VERSION: 6.1.1

** Please be patient while the chart is being deployed **

Your WordPress site can be accessed through the following DNS name from within your cluster:

    happy-panda-wordpress.mydemo.svc.cluster.local (port 80)

To access your WordPress site from outside the cluster follow the steps below:

1. Get the WordPress URL by running these commands:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace mydemo -w happy-panda-wordpress'

   export SERVICE_IP=$(kubectl get svc --namespace mydemo happy-panda-wordpress --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
   echo "WordPress URL: http://$SERVICE_IP/"
   echo "WordPress Admin URL: http://$SERVICE_IP/admin"

2. Open a browser and access WordPress using the obtained URL.

3. Login with the following credentials below to see your blog:

  echo Username: user
  echo Password: $(kubectl get secret --namespace mydemo happy-panda-wordpress -o jsonpath="{.data.wordpress-password}" | base64 -d)

Installation goes well; the Helm release shows up and the resources are created (becoming ready shortly afterwards):

$ kubectl get all -n mydemo
NAME                                         READY   STATUS    RESTARTS   AGE
pod/happy-panda-wordpress-6756b48578-xqzwr   0/1     Running   0          23s
pod/happy-panda-mariadb-0                    0/1     Running   0          23s

NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/happy-panda-mariadb     ClusterIP      10.152.183.104   <none>        3306/TCP                     23s
service/happy-panda-wordpress   LoadBalancer   10.152.183.221   <pending>     80:31402/TCP,443:32116/TCP   23s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/happy-panda-wordpress   0/1     1            0           23s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/happy-panda-wordpress-6756b48578   1         1         0       23s

NAME                                   READY   AGE
statefulset.apps/happy-panda-mariadb   0/1     23s

Helm release is installed as expected:

$ helm list -n mydemo
NAME       	NAMESPACE	REVISION	UPDATED                                 	STATUS  	CHART            	APP VERSION
happy-panda	mydemo   	1       	2023-01-13 08:50:58.636327755 +1300 NZDT	deployed	wordpress-15.2.22	6.1.1      

Now, we try to uninstall the helm release:


$ helm uninstall happy-panda -n mydemo
release "happy-panda" uninstalled

The helm release is gone as expected:

$ helm list -n mydemo
NAME	NAMESPACE	REVISION	UPDATED	STATUS	CHART	APP VERSION

... unfortunately, some resources are still there:

$ kubectl get all -n mydemo
NAME                                         READY   STATUS    RESTARTS   AGE
pod/happy-panda-mariadb-0                    1/1     Running   0          2m17s
pod/happy-panda-wordpress-6756b48578-xqzwr   0/1     Running   0          2m17s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/happy-panda-wordpress-6756b48578   1         1         0       2m17s

I had this issue with microk8s version 1.25 and 1.26.

@neoaggelos
Contributor

Hi @l5,

Unfortunately, I am unable to reproduce the issue. After removing the chart, all pods/deployments/etc go away.

Can you share an inspection report from the node? microk8s inspect can help you with this. Thanks!

@l5
Author

l5 commented Jan 16, 2023

Hi @neoaggelos, thanks for checking! Since then I have tried restarting, resetting, etc., but the issue persists.

The inspection report is quite comprehensive and contains a lot of information about the system that we would rather not publish. Is there a way to share the inspection report more safely?
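Not an official workflow, just a sketch: the report is a plain tarball, so one could list its contents first and extract only the log files relevant here before sharing anything (the file-name patterns inside the archive are assumptions; check them against what microk8s inspect actually produced):

```shell
# List what the inspection report contains before deciding what to share.
tar -tzf inspection-report-*.tar.gz

# Extract only the container-runtime / kubelite logs
# (the '*containerd*' and '*kubelite*' patterns are assumptions).
tar -xzf inspection-report-*.tar.gz --wildcards '*containerd*' '*kubelite*'
```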

Interestingly, the Deployment is removed by helm uninstall, but the ReplicaSet remains. I also tried manually removing the ReplicaSet with microk8s kubectl delete replicaset happy-panda-wordpress-... -n mydemo; the ReplicaSet was removed, but the pods stayed.
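For what it's worth, a quick way to cross-check this from the client side (namespace and release names taken from the reproduction above) is to print each surviving pod alongside the controller it claims as its owner:

```shell
# Print each pod with its first owner reference; an owner that no longer
# exists in the cluster means the pod is effectively orphaned.
microk8s kubectl get pods -n mydemo -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}{end}'
```

This command is cluster-dependent and only useful against the affected node.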

Describing the pods shows they are Controlled By: StatefulSet/happy-panda-mariadb and ReplicaSet/happy-panda-wordpress-6756b48578 respectively, yet neither of those controllers exists any more:

$ microk8s kubectl describe pod happy-panda-mariadb-0 -n mydemo
Name:             happy-panda-mariadb-0
Namespace:        mydemo
Priority:         0
Service Account:  happy-panda-mariadb
Node:             x8u/192.168.0.103
Start Time:       Sat, 14 Jan 2023 16:24:52 +1300
Labels:           app.kubernetes.io/component=primary
                  app.kubernetes.io/instance=happy-panda
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mariadb
                  controller-revision-hash=happy-panda-mariadb-6f6dffc47d
                  helm.sh/chart=mariadb-11.4.2
                  statefulset.kubernetes.io/pod-name=happy-panda-mariadb-0
Annotations:      checksum/configuration: 7431f34442af5d1f3ee84e239e0e89358cba91fb71db3b097513c4540b913d14
                  cni.projectcalico.org/containerID: 44bcbea03fa94a2b457c9ad10ec86dcbb8ae5b2e9cfad1560e69dfed57aeb613
                  cni.projectcalico.org/podIP: 10.1.132.33/32
                  cni.projectcalico.org/podIPs: 10.1.132.33/32
Status:           Running
IP:               10.1.132.33
IPs:
  IP:           10.1.132.33
Controlled By:  StatefulSet/happy-panda-mariadb
Containers:
  mariadb:

$ microk8s kubectl describe pod happy-panda-wordpress-6756b48578-wvm4h -n mydemo
Name:             happy-panda-wordpress-6756b48578-wvm4h
Namespace:        mydemo
Priority:         0
Service Account:  default
Node:             x8u/192.168.0.103
Start Time:       Sat, 14 Jan 2023 16:24:57 +1300
Labels:           app.kubernetes.io/instance=happy-panda
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=wordpress
                  helm.sh/chart=wordpress-15.2.22
                  pod-template-hash=6756b48578
Annotations:      cni.projectcalico.org/containerID: 3c8f1c8951248e0bde756ee67c3bdbabee6f9b410905037acc61132d6201378f
                  cni.projectcalico.org/podIP: 10.1.132.42/32
                  cni.projectcalico.org/podIPs: 10.1.132.42/32
Status:           Running
IP:               10.1.132.42
IPs:
  IP:           10.1.132.42
Controlled By:  ReplicaSet/happy-panda-wordpress-6756b48578
Containers:
  wordpress:

... while only the two pods are shown when I list everything with microk8s kubectl get all -n mydemo.
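As a workaround sketch (not a fix for the underlying problem): since the controllers are already gone, one could check whether a finalizer is holding the pods and, as a last resort, force-delete them:

```shell
# Check whether a finalizer is blocking deletion of the surviving pod.
microk8s kubectl get pod happy-panda-mariadb-0 -n mydemo -o jsonpath='{.metadata.finalizers}'

# Last resort: delete without waiting for kubelet confirmation.
# This skips graceful shutdown, so use with care.
microk8s kubectl delete pod happy-panda-mariadb-0 -n mydemo --grace-period=0 --force
```

These commands need to run against the affected cluster, so they are shown here only as a sketch.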

@neoaggelos
Contributor

Would you mind sharing an inspection report from the cluster? Could be that the pod teardown is failing.

In case an inspection report is not possible, please share the logs from

sudo journalctl -u snap.microk8s.daemon-containerd

@djjudas21

I have observed similar behaviour, which seems to be related to dqlite. Have a look at my report in #3735: attempting to scale Deployments doesn't change the number of pods, and deleting Helm charts or otherwise deleting Deployments doesn't remove the pods either.
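If dqlite is the suspect, the datastore and API-server logs may show whether deletions are actually being persisted. On MicroK8s these live in the kubelite and k8s-dqlite snap services (unit names assumed from a standard snap install):

```shell
# API server / controller-manager logs (MicroK8s bundles them in kubelite).
sudo journalctl -u snap.microk8s.daemon-kubelite --no-pager | tail -n 100

# Datastore (dqlite) logs.
sudo journalctl -u snap.microk8s.daemon-k8s-dqlite --no-pager | tail -n 100
```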

@jorhett

jorhett commented Aug 20, 2023

FWIW, I'm seeing the same behaviour on a kubeadm cluster running plain Kubernetes 1.27 with external etcd: pods and ReplicaSets do not go away.


stale bot commented Jul 16, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the inactive label Jul 16, 2024
@stale stale bot closed this as completed Aug 15, 2024