
Error when running inference in C++: onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Sigmoid node. #22836

Open
pcycccccc opened this issue Nov 14, 2024 · 0 comments
@pcycccccc

Hello, I encountered the following error message while loading a model using the GPU in C++:

onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Sigmoid node. Name:'/model.0/act/Sigmoid' Status Message: CUDA error cudaErrorNoKernelImageForDevice:no kernel image is available for execution on the device.

My GPU is an NVIDIA GeForce GT 730 with a compute capability of 3.5. The driver version I installed is 475.14, CUDA is 11.4, cuDNN is 8.2.2, OpenCV-CUDA is 4.7.0, and ONNX Runtime is 1.12.0 (I have also tested 1.11.0). The model loads and runs inference correctly on the CPU, but fails with the error above when using the GPU. I have checked the versions of the dynamic libraries and haven't found any compatibility issues; they should all be compatible. Given the error above, I'm not sure how to resolve this. Does anyone have any suggestions?

Model: yolov8n.onnx
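
For context, the session is created roughly like this (a minimal sketch using the standard ONNX Runtime C++ API, not my exact code; the model path and device index are placeholders):

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov8n");
    Ort::SessionOptions session_options;

    // Request the CUDA execution provider on device 0; inference on this
    // session is where the Sigmoid node fails with
    // cudaErrorNoKernelImageForDevice.
    OrtCUDAProviderOptions cuda_options{};
    cuda_options.device_id = 0;
    session_options.AppendExecutionProvider_CUDA(cuda_options);

    // On Windows the model path is a wide string (ORTCHAR_T).
    Ort::Session session(env, L"yolov8n.onnx", session_options);

    // ... build input tensors and call session.Run() as usual ...
    return 0;
}
```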

@fajin-corp added the ep:CUDA (issues related to the CUDA execution provider) label on Nov 14, 2024