
MountVolume.MountDevice failed for volume / Timeout waiting for mount #144

Open
wccropper opened this issue Oct 21, 2024 · 9 comments

wccropper commented Oct 21, 2024

I am trying to use a persistent, manually created bucket and share it across pods:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: bucket-csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  bucket: csi-s3-1
  endpoint: https://<company.internal.s3.endpoint>    # actual SSL cert, not self-signed
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: csi-s3
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: csi-s3
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: csi-s3
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: csi-s3
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manualbucket-with-path
spec:
  storageClassName: bucket-csi-s3
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    namespace: csi-s3
    name: bucket-csi-s3-manual-pvc
  csi:
    driver: ru.yandex.s3.csi
    controllerPublishSecretRef:
      name: csi-s3-secret
      namespace: csi-s3
    nodePublishSecretRef:
      name: csi-s3-secret
      namespace: csi-s3
    nodeStageSecretRef:
      name: csi-s3-secret
      namespace: csi-s3
    volumeAttributes:
      capacity: 10Gi
      mounter: geesefs
      options: --memory-limit 1000 --dir-mode 0777 --file-mode 0666
    volumeHandle: manualbucket/path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bucket-csi-s3-manual-pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
  namespace: csi-s3
spec:
  containers:
   - name: csi-s3-test-nginx
     image: nginx
     volumeMounts:
       - mountPath: /usr/share/nginx/html/s3
         name: webroot
  volumes:
   - name: webroot
     persistentVolumeClaim:
       claimName: bucket-csi-s3-manual-pvc
       readOnly: false

The StorageClass, PVC, and PV are all created correctly, but the Pod fails with the error "MountVolume.MountDevice failed for volume "manualbucket-with-path" : rpc error: code = Unknown desc = Timeout waiting for mount" and never starts.
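A first triage step is to check the Pod's events and the csi-s3 node plugin logs on the node where the Pod was scheduled. A minimal sketch, assuming a default install with the node DaemonSet running as "csi-s3" in kube-system (namespace, label, and container name are assumptions; adjust to your deployment):

# Confirm the mount error and the scheduled node via the Pod's events
kubectl describe pod csi-s3-test-nginx -n csi-s3

# Tail the node plugin logs; namespace, label, and container name assume a
# default csi-s3 install -- adjust to where the DaemonSet actually runs
kubectl logs -n kube-system -l app=csi-s3 -c csi-s3 --tail=100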


SeanHai commented Nov 4, 2024

I encountered the same problem as well. Can anyone help?


xincan commented Nov 12, 2024

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bucket-csi-s3-manual-pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Change storageClassName: "" to storageClassName: "bucket-csi-s3".

@justbelka

> Change storageClassName: "" to storageClassName: "bucket-csi-s3".

I have the same error too. I changed it as you said, but it didn't work. Can you explain in more detail?

@rhallier

Hi,
I get the same issue, even though it worked correctly with a previous version of the chart ...

@vadimkim

Same issue for me. I am also using a pre-defined bucket name. The Pod is stuck "creating" and never finishes.

@dzmitryastrouski

Did you create the bucket first? Create the bucket manually or with an IaC approach; only then will the PVC -> PV -> bucket chain work.
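For example, with the AWS CLI (a sketch; the bucket name and endpoint placeholder come from the StorageClass above, and credentials for the S3 endpoint are assumed to be configured already):

# Create the bucket referenced by the StorageClass before using the PV/PVC
aws s3 mb s3://csi-s3-1 --endpoint-url https://<company.internal.s3.endpoint>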


scobit commented Feb 6, 2025

Any updates? I have the same issue.


vadimkim commented Feb 6, 2025

@scobit I managed to solve the issue. The problem in my case was a missing geesefs binary. I am using OKD + CoreOS on the worker nodes, and that OS has FUSE support but no geesefs binary, so you have to install it manually. In my case that meant downloading the geesefs binary from the Releases page, putting it in /usr/local/bin, and creating a symlink to it named "mount.geesefs" (see the sketch below).
Keep in mind that the default SELinux policy, if enforcing, might be blocking the mount as well. Since this was a development cluster I switched SELinux off; for a production system you need to write a proper policy.
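A sketch of those steps on a worker node (the release asset name and install path are assumptions; check the geesefs Releases page for the build matching your architecture):

# Install the geesefs binary on the host (asset name assumed: geesefs-linux-amd64)
curl -L -o /usr/local/bin/geesefs \
  https://github.com/yandex-cloud/geesefs/releases/latest/download/geesefs-linux-amd64
chmod +x /usr/local/bin/geesefs

# The mount helper is looked up by name; create the mount.geesefs symlink
ln -s /usr/local/bin/geesefs /usr/local/bin/mount.geesefs

# Check SELinux; "Enforcing" may block the mount. Permissive is fine for a
# dev cluster, but write a proper policy for production
getenforce
setenforce 0   # temporary: does not persist across reboots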


scobit commented Feb 6, 2025

@vadimkim Thank you, the problem was SELinux.
