
"Access Denied" when using Hetzner Object storage #157

Open
knuurr opened this issue Feb 4, 2025 · 1 comment
Open

"Access Denied" when using Hetzner Object storage #157

knuurr opened this issue Feb 4, 2025 · 1 comment

Comments

@knuurr
Copy link

knuurr commented Feb 4, 2025

I am trying to configure the CSI provider for use with Hetzner Object Storage (S3-compatible). This is the first time I am doing something like this. A few days ago I successfully configured this provider with AWS S3 for testing, but I cannot get the same result with Hetzner.

Logs from the csi-s3-provisioner-0 pod mention Access Denied (bucket-name here is obviously a placeholder):

I0204 14:32:39.720832 1 controllerserver.go:69] Got a request to create volume bucket-name/pvc-b590d3c8-d9ac-4c7f-a85f-b0a1d4301315  

E0204 14:32:39.928368 1 utils.go:101] GRPC error: failed to check if bucket bucket-name/pvc-b590d3c8-d9ac-4c7f-a85f-b0a1d4301315 exists: Access Denied.  

I0204 14:33:11.985837 1 utils.go:97] GRPC call: /csi.v1.Controller/CreateVolume  

I0204 14:33:11.985873 1 controllerserver.go:69] Got a request to create volume bucket-name/pvc-b590d3c8-d9ac-4c7f-a85f-b0a1d4301315  

E0204 14:33:12.158918 1 utils.go:101] GRPC error: failed to check if bucket bucket-name/pvc-b590d3c8-d9ac-4c7f-a85f-b0a1d4301315 exists: Access Denied.  

I0204 14:34:16.216043 1 utils.go:97] GRPC call: /csi.v1.Controller/CreateVolume  

I0204 14:34:16.216075 1 controllerserver.go:69] Got a request to create volume bucket-name/pvc-b590d3c8-d9ac-4c7f-a85f-b0a1d4301315  

E0204 14:34:16.345959 1 utils.go:101] GRPC error: failed to check if bucket bucket-name/pvc-b590d3c8-d9ac-4c7f-a85f-b0a1d4301315 exists: Access Denied.

This doesn't make much sense. I tried to manually put something into this object storage using the mc command-line tool, with the same access keys we use for the CSI provider. The result with mc was successful:

% mc cp kek.txt hetzner-s3/bucket-name/kek.txt
/Users/qbus/kek.txt:                 0 B / ?  ░░░░░░░░░░░░░░░░░░░░▓

% mc ls hetzner-s3/bucket-name
[2025-02-04 15:50:18 CET]     0B STANDARD kek.txt
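
The error mentions failing to check whether the bucket exists, which presumably maps to an S3 HeadBucket call. Assuming the same keys are configured in an aws CLI profile (the profile name below is a placeholder), a closer equivalent to what the driver does would be:

aws s3api head-bucket \
  --profile hetzner \
  --bucket bucket-name \
  --endpoint-url https://fsn1.your-objectstorage.com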

On the deployment side:

For now we provision the Secret manually:

kubectl create secret generic s3-hetzner-secret \                                          
  --namespace kube-system \
  --from-literal=accessKey="xxxxxx" \
  --from-literal=secretKey="xxxxx" \
  --from-literal=endpoint="https://fsn1.your-objectstorage.com"
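
For what it's worth, the stored values can be checked for stray whitespace, since a trailing newline in a key is a classic cause of Access Denied:

# Decode the stored access key and look for trailing whitespace/newlines
kubectl -n kube-system get secret s3-hetzner-secret \
  -o jsonpath='{.data.accessKey}' | base64 -d | od -c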

We also apply Helm chart using Argo CD:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: s3-csi-driver
  namespace: argocd
spec:
   # ...
  template:
    metadata:
      name: '{{name}}-s3-csi-driver'
    spec:
      project: default
      source:
        chart: csi-s3
        repoURL: 'https://yandex-cloud.github.io/k8s-csi-s3/charts'
        targetRevision: 0.42.1
        helm:
          values: |
            # Default values mirrored from https://github.com/yandex-cloud/k8s-csi-s3/blob/master/deploy/helm/csi-s3/values.yaml
            storageClass:
              # Specifies whether the storage class should be created
              create: true
              # Name
              name: s3-hetzner
              # Use a single bucket for all dynamically provisioned persistent volumes
              singleBucket: "bucket-name"
              # mounter to use - either geesefs, s3fs or rclone (default geesefs)
              mounter: geesefs
              # GeeseFS mount options
              mountOptions: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
              # Volume reclaim policy
              reclaimPolicy: Delete
              # Annotations for the storage class
              # Example:
              # annotations:
              #   storageclass.kubernetes.io/is-default-class: "true"
              annotations: {}

            secret:
              # Specifies whether the secret should be created
              create: false
              # Name of the secret
              name: s3-hetzner-secret
              # S3 Access Key
              accessKey: ""
              # S3 Secret Key
              secretKey: ""
              # Endpoint
              endpoint: https://storage.yandexcloud.net
              # Region
              region: ""
      destination:
        server: '{{server}}'
        namespace: kube-system
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true 
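
Since secret.create is false here, one thing worth confirming is that the rendered StorageClass actually points at the manually created secret. The CSI external-provisioner resolves credentials via the standard secret-reference parameters, so the rendered object should contain something along these lines (a sketch of what I'd expect, not verified chart output):

kubectl get storageclass s3-hetzner -o yaml
# Expected parameters (sketch):
#   csi.storage.k8s.io/provisioner-secret-name: s3-hetzner-secret
#   csi.storage.k8s.io/provisioner-secret-namespace: kube-system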

For tests I used examples from repo, adapted to our context:

# Dynamically provisioned PVC:
# A bucket or path inside bucket will be created automatically
# for the PV and removed when the PV will be removed
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hetzner-s3-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: s3-hetzner
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
   - name: csi-s3-test-nginx
     image: nginx
     volumeMounts:
       - mountPath: /usr/share/nginx/html/s3
         name: webroot
  volumes:
   - name: webroot
     persistentVolumeClaim:
       claimName: csi-s3-manual-pvc
       readOnly: false

However, the Pod never gets provisioned successfully.
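
The failure can at least be narrowed down from the PVC events and the provisioner logs:

kubectl -n default describe pvc hetzner-s3-pvc
kubectl -n kube-system logs csi-s3-provisioner-0 --all-containers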

What is missing here? What am I doing wrong?

knuurr (Author) commented Feb 5, 2025

Update: when I provision the SC and Secret using the Helm chart, then I can successfully provision storage...

# snip ....
        helm:
          values: |
            storageClass:
              # Specifies whether the storage class should be created
              create: true
              # Name
              name: s3-hetzner-sc
              # Use a single bucket for all dynamically provisioned persistent volumes
              singleBucket: "<bucket-name>"
            secret:
              create: true
              name: s3-hetzner-secret
              accessKey: "xxxxxxxxx"
              secretKey: "yyyyyyyyyy"
              endpoint: "https://xxx.your-objectstorage.com"
              # region: us-east-2

PVC and Pod templates used:

# Dynamically provisioned PVC:
# A bucket or path inside bucket will be created automatically
# for the PV and removed when the PV will be removed
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hetzner-s3-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: s3-hetzner-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: default
spec:
  containers:
   - name: csi-s3-test-nginx
     image: nginx
     volumeMounts:
       - mountPath: /usr/share/nginx/html/s3
         name: webroot
  volumes:
   - name: webroot
     persistentVolumeClaim:
       claimName: hetzner-s3-pvc
       readOnly: false
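
To double-check the working variant, the mount can be verified roughly as in the repo's example (alias and bucket names are placeholders):

# Confirm the FUSE mount inside the Pod and round-trip a file through the bucket
kubectl exec csi-s3-test-nginx -- sh -c 'mount | grep fuse'
kubectl exec csi-s3-test-nginx -- sh -c 'echo hello > /usr/share/nginx/html/s3/hello.txt'
mc ls hetzner-s3/bucket-name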

Really, what am I missing here? What is different?
