I'm trying to use the ghcr.io/ggml-org/llama.cpp:server-vulkan Docker image on my Mac with Asahi Linux (Apple M1), but I'm having issues getting the GPU to work. I'm not very familiar with GPU or Vulkan configuration, so any help would be appreciated.
What I'm trying to achieve:
I want to run llama.cpp:server-vulkan using my Apple M1 GPU on Asahi Linux.
What I've tried:
Running the container with --privileged and mounting /dev/dri (see the command below).
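For reference, the invocation looks roughly like this (the model path, port, and layer count are placeholders, not my exact values):

```sh
# Run the Vulkan server image with the DRI render nodes passed through.
# /models/model.gguf is a placeholder path.
docker run --rm -it \
  --privileged \
  --device /dev/dri \
  -v "$HOME/models:/models" \
  -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-vulkan \
  -m /models/model.gguf \
  --gpu-layers 99 \
  --host 0.0.0.0 --port 8080
```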
Problem:
When I run the container, I get the following logs:
warning: no usable GPU found, --gpu-layers option will be ignored
warning: one possible reason is that llama.cpp was compiled without GPU support
warning: consult docs/build.md for compilation instructions
I suspect it might be related to:
- Incompatibility between the Vulkan build and the Apple M1 GPU.
- Misconfiguration of Vulkan or the Docker image.
Additional Information:
- I am using Asahi Linux on an Apple M1 Mac.
- The GPU is detected as Apple M1 (G13G B1) by vulkaninfo on the host (see the container check below).
- I'm not very familiar with Vulkan, GPU configuration, or Docker GPU passthrough.
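The container check I mentioned: one thing I could try (I think) is to see whether Vulkan enumerates any device inside the container at all. This assumes the image has a shell and an apt-based package manager, which I haven't verified:

```sh
# Open a shell in the server image with the GPU device passed through
docker run --rm -it --device /dev/dri --entrypoint /bin/sh \
  ghcr.io/ggml-org/llama.cpp:server-vulkan

# Inside the container -- assuming an apt-based image (unverified):
apt-get update && apt-get install -y vulkan-tools mesa-vulkan-drivers
# Does this list the Apple GPU, or nothing? Note that the Apple GPU
# Vulkan driver ("honeykrisp") is quite new and may not be included
# in the Mesa packages shipped inside the image.
vulkaninfo --summary
```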
What I need help with:
- Is the llama.cpp:server-vulkan image compatible with the Apple M1 GPU?
- Do I need to build llama.cpp myself with special compile flags (see the build sketch below)?
- Is there any additional configuration needed for Vulkan on Asahi Linux?
- Any tips on debugging GPU passthrough in Docker with Vulkan?
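For the second question, this is the kind of native build I have in mind, going by docs/build.md (-DGGML_VULKAN=ON appears to be the documented Vulkan switch; the rest is my guess at a sensible invocation):

```sh
# Native build on the Asahi host with the Vulkan backend enabled;
# requires the Vulkan headers/loader and Mesa's Vulkan driver on the host
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Then run the server directly, bypassing Docker entirely:
./build/bin/llama-server -m /path/to/model.gguf --gpu-layers 99
```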
Any guidance or tips would be greatly appreciated!