[HW Accel Support]: TensorRT improper library search: libcuda.so vs libcuda.so.1 #11566
Unanswered · JoshuaPK asked this question in Hardware Acceleration Support
Replies: 2 comments, 2 replies
-
I get the same error. Usually, for other projects, I can expose CUDA to the container through the standard Docker GPU options. I do have the CUDA Toolkit installed, but I didn't find a file like libcuda.so. What confuses me is which Docker image to use.
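For reference, a minimal sketch of the standard docker-compose GPU reservation this kind of setup usually relies on; the service name is an assumption, and the image reference follows the tag given later in this thread:

services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            # Standard Compose NVIDIA device reservation; requires the
            # NVIDIA Container Toolkit (or equivalent) on the host.
            - driver: nvidia
              count: 1
              capabilities: [gpu]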
-
It seems the fix is to mount the host library into the container as a volume:
- /usr/lib/libcuda.so:/usr/lib/libcuda.so:ro
- ...
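For illustration, a slightly fuller version of that bind-mount workaround might look like the sketch below. The host and container paths here are assumptions to verify locally (for example with find / -name "libcuda.so*"): on RHEL-family hosts such as AlmaLinux 9 the driver libraries usually live under /usr/lib64, while Debian-based images typically look in /usr/lib/x86_64-linux-gnu.

volumes:
  # Map the versioned host library onto both the versioned and the
  # unversioned names the container-side loader may search for.
  - /usr/lib64/libcuda.so.1:/usr/lib/x86_64-linux-gnu/libcuda.so.1:ro
  - /usr/lib64/libcuda.so.1:/usr/lib/x86_64-linux-gnu/libcuda.so:ro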
-
Describe the problem you are having
When first starting up the Frigate container, the yolov7-320.trt model fails to build. It stops with the error: Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory. This happens because there is no unversioned libcuda.so in the image (only the versioned libcuda.so.1). The provided Dockerfile fixes the issue (note the 'entrypoint' entry). I am not sure whether this is a problem with the NVIDIA base image, with the way the Frigate image is built, or with the fact that I am using podman and podman-compose instead of Docker.
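The Dockerfile mentioned above is not reproduced here, but a common shape for an entrypoint-based workaround is to create the unversioned libcuda.so symlink before handing off to the image's normal init. A hypothetical compose-level sketch, assuming the Debian library path and /init as the image's original entrypoint (verify both, e.g. with docker inspect):

entrypoint:
  - /bin/sh
  - -c
  # Create the unversioned symlink the cuDNN loader asks for, refresh the
  # linker cache, then exec the image's original entrypoint (assumed /init).
  - ln -sf /usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/x86_64-linux-gnu/libcuda.so && ldconfig && exec /init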
Version
frigate:stable-tensorrt
Frigate config file
N/A at this stage
docker-compose file or Docker CLI command
Relevant log output
FFprobe output from your camera
Operating system
Other Linux
Install method
Docker Compose
Network connection
Wired
Camera make and model
N/A at this point
Any other information that may be helpful
The CUDA libraries on the host OS (AlmaLinux 9) are version 555.42.02 and the CUDA Toolkit version is 12.5.40.