Add dynamic name tag to volumes created via CSI driver #996

Closed
devinnasar opened this issue Jul 27, 2021 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@devinnasar

Is your feature request related to a problem?/Why is this needed

One thing I notice is that the in-tree driver creates a Name tag for EBS volumes equal to 'kubernetes-dynamic-'. I'm not seeing a way to get a dynamic Name tag onto volumes created by the CSI driver at present.

Describe the solution you'd like in detail
The driver would create a Name tag equal to 'kubernetes-dynamic-', with an override that could be provided via controller.additionalArgs in the helm chart.

Describe alternatives you've considered
As far as I know, trying to pass controller.extraVolumeTags.Name will cause the same Name tag to be created on all EBS Volumes created by the driver.
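
For illustration, a minimal sketch of that alternative, assuming the chart's controller.extraVolumeTags value; the tag value below is just a placeholder, and every volume the driver provisions ends up with the same static Name:

# Hypothetical Helm values snippet: one static Name applied to all volumes created by the driver
controller:
  extraVolumeTags:
    Name: my-static-volume-name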

Additional context
Screenshot: example_name_tag

@AndyXiangLi
Contributor

We are aware of this tag issue. Can you take a look at #180 and see if it addresses your issue?

@kahirokunn
Member

From these comments it looks like there has been no progress, but is there actually any?
#180 (comment)

@buddhdev-harsh

buddhdev-harsh commented Sep 15, 2021

Well, I have a pretty hacky fix for this issue:
every time you create a new volume, you redeploy the ebs-csi driver with the extra tag suggested in README.md. The extra tag key is Name and its value is supplied by a ConfigMap that is also created as part of the redeploy.

I was using this with Helm charts and I also had an issue with naming EBS volumes in AWS, so I came up with this solution.

Script that deploys the EBS CSI driver along with the ConfigMap and the Helm chart:

#!/bin/bash

if [[ -z "$1" ]]; then
    echo -e "error: \033[31;7m Please give a name for the deployment. Example: EBS_name.sh <chartname> <path_for_chart>\e[0m"
    exit 1
fi

if [[ -z "$2" ]]; then
    echo -e "error: \033[31;7m Please give a path for the chart. Example: EBS_name.sh $1 <path_for_chart>\e[0m"
    exit 1
fi

# Recreate the ConfigMap holding the Name tag value, then redeploy the driver and install the chart
kubectl delete -f controller.yaml -n kube-system
kubectl delete configmap name-config -n kube-system
kubectl create configmap name-config --from-literal=SPECIAL_NAME_KEY="$1" -n kube-system
kubectl apply -f controller.yaml -n kube-system
helm install "$1" "$2" -n "$1" --create-namespace

For this I kept the controller.yaml file below in the same location as the script. It deploys the ebs-csi driver with a ConfigMap reference that extracts the value and uses it to tag each volume with the Name tag.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ebs-csi-controller
  labels:
    app.kubernetes.io/name: aws-ebs-csi-driver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ebs-csi-controller
      app.kubernetes.io/name: aws-ebs-csi-driver
  template:
    metadata:
      labels:
        app: ebs-csi-controller
        app.kubernetes.io/name: aws-ebs-csi-driver
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ebs-csi-controller-sa
      priorityClassName: system-cluster-critical
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - operator: Exists
          effect: NoExecute
          tolerationSeconds: 300
      containers:
        - name: ebs-plugin
          image: k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            # - {all,controller,node} # specify the driver mode
            - --endpoint=$(CSI_ENDPOINT)
            - --logtostderr
            - --v=2
            # Change I made: pull the Name tag value from the ConfigMap and pass it as an extra tag
            - --extra-tags=Name=$(SPECIAL_NAME_KEY)
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
            - name: CSI_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws-secret
                  key: key_id
                  optional: true
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws-secret
                  key: access_key
                  optional: true
            # Change I made: expose the ConfigMap value as an environment variable for the tag
            - name: SPECIAL_NAME_KEY
              valueFrom:
                configMapKeyRef:
                  name: name-config
                  key: SPECIAL_NAME_KEY
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
          ports:
            - name: healthz
              containerPort: 9808
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 10
            timeoutSeconds: 3
            periodSeconds: 10
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 10
            timeoutSeconds: 3
            periodSeconds: 10
            failureThreshold: 5
        - name: csi-provisioner
          image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1
          args:
            - --csi-address=$(ADDRESS)
            - --v=2
            - --feature-gates=Topology=true
            - --extra-create-metadata
            - --leader-election=true
            - --default-fstype=ext4
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-attacher
          image: k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
          args:
            - --csi-address=$(ADDRESS)
            - --v=2
            - --leader-election=true
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-snapshotter
          image: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3
          args:
            - --csi-address=$(ADDRESS)
            - --leader-election=true
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-resizer
          image: k8s.gcr.io/sig-storage/csi-resizer:v1.0.0
          imagePullPolicy: Always
          args:
            - --csi-address=$(ADDRESS)
            - --v=2
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: liveness-probe
          image: k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
          args:
            - --csi-address=/csi/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir
          emptyDir: {}

And it works well.
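
For anyone trying this, an example invocation of the script above; the chart name and chart path are placeholders:

# Hypothetical usage: tag new volumes with Name=my-app, then install the chart into the my-app namespace
./EBS_name.sh my-app ./charts/my-app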

@tbondarchuk

@devinnasar I've just found out that the Name tag is created if the driver's chart is deployed with controller.k8sTagClusterId=CLUSTERNAME.
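
For reference, a sketch of setting that value at install time, assuming the chart is installed from the kubernetes-sigs Helm repository; the release name, namespace, and cluster name are placeholders:

# Hypothetical install: setting k8sTagClusterId makes the driver add Name and KubernetesCluster tags
helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system \
  --set controller.k8sTagClusterId=sandbox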

tags without k8sTagClusterId:

kubernetes.io/created-for/pv/name | pvc-196694cc-3e6c-4bde-bedf-3709ee981da3
kubernetes.io/created-for/pvc/name | ebs-claim
kubernetes.io/created-for/pvc/namespace | default
ebs.csi.aws.com/cluster | true
CSIVolumeName | pvc-196694cc-3e6c-4bde-bedf-3709ee981da3

tags with k8sTagClusterId configured:

kubernetes.io/created-for/pv/name | pvc-fb58e538-da47-4524-a58c-6a94de4bd252
kubernetes.io/cluster/sandbox | owned
kubernetes.io/created-for/pvc/namespace | default
ebs.csi.aws.com/cluster | true
CSIVolumeName | pvc-fb58e538-da47-4524-a58c-6a94de4bd252
KubernetesCluster | sandbox
kubernetes.io/created-for/pvc/name | ebs-claim
Name | sandbox-dynamic-pvc-fb58e538-da47-4524-a58c-6a94de4bd252

Tested on chart version 2.4.1

P.S. The chart's values.yaml comment says "ID of the Kubernetes cluster used for tagging provisioned EBS volumes (optional)", which is rather vague. I believe it would be much clearer if it mentioned the Name and KubernetesCluster tags explicitly.
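
For example, the values.yaml entry could spell that out along these lines; the wording is only a suggestion, based on the tags observed above:

  # ID of the Kubernetes cluster used for tagging provisioned EBS volumes (optional).
  # When set, volumes also get Name=<clusterID>-dynamic-<pv-name> and KubernetesCluster=<clusterID> tags.
  k8sTagClusterId: ""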

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 3, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
