Support for multiple NVIDIA GPUs #901
Comments
Hi @chris-gputrader, thanks for reporting. What distro and kernel are you on?
Also, can you provide the output of the sysbox-mgr (…)? I want to see if … Thanks!
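The environment details asked for above could be gathered with something like the following sketch (the log path is an assumption based on sysbox's default logging location, not stated in this thread):

```shell
# Collect distro, kernel, and sysbox-mgr details (paths are assumptions)
if [ -f /etc/os-release ]; then
  cat /etc/os-release            # distro name and version
fi
uname -r                         # kernel release
# sysbox-mgr typically logs to /var/log/sysbox-mgr.log (assumption)
if [ -f /var/log/sysbox-mgr.log ]; then
  tail -n 50 /var/log/sysbox-mgr.log
fi
```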
I had the same issue.
Thanks.
I'm encountering an issue when attempting to use the sysbox runtime with containers that require NVIDIA GPU access on a system with multiple GPUs. While the setup works seamlessly on a single-GPU machine, it fails on a multi-GPU machine.
The container should have access to all or specific GPUs as defined in the Docker Compose file, with GPU devices and drivers properly passed through by the sysbox runtime.
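As a concrete illustration of that expectation, a Compose file along these lines would request the GPUs (service name, image, and runtime key are illustrative assumptions, not taken from the report):

```yaml
# Hypothetical docker-compose.yml sketch; service and image names are assumptions
services:
  gpu-app:
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
    runtime: sysbox-runc
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all                        # or device_ids: ["0", "1"] for specific GPUs
              capabilities: [gpu]
```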
When deploying a container on the multi-GPU system, the following error occurs:
Failed to deploy a stack: compose up operation failed: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: container_linux.go:439: starting container process caused: process_linux.go:608: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: mount error: mount operation failed: /var/lib/docker/overlay2/e5409caee5c762014641d9a3fa7981fc960b3c2309980dda0e6b5d87b096a649/merged/proc/driver/nvidia: no such file or directory: unknown
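Since the hook fails to bind-mount `/proc/driver/nvidia` inside the container's rootfs, a first diagnostic on the failing host would be to confirm that the driver's procfs tree exists there at all (these commands are a suggested check, not part of the report):

```shell
# Verify the host-side NVIDIA driver procfs entries the hook tries to mount
if [ -d /proc/driver/nvidia ]; then
  ls /proc/driver/nvidia
else
  echo "no /proc/driver/nvidia on this host"
fi
# List the GPUs the driver sees, if nvidia-smi is installed
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi -L
fi
```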