Can cuBLAS be supported? #8
Comments
I propose that we build on https://github.com/rust-cuda/cuda-sys. I can contribute. How about it?
Hi, thank you for the note! Yes, absolutely. It is very much encouraged to keep on extending the list of sources as long as they conform to the standard BLAS/LAPACK interface. Please take a look at the architecture: https://github.com/blas-lapack-rs/blas-lapack-rs.github.io/wiki#sources
Can you please clarify why?
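For context, a source crate in this scheme simply links a library that exports the standard Fortran BLAS symbols. A minimal sketch of what that looks like from the Rust side, using `dgemm_` as the example (not taken from any particular sys crate):

```rust
// Sketch only: a "source" crate exposes the standard Fortran BLAS symbols,
// such as dgemm_, so higher-level crates can call them without knowing
// which implementation (OpenBLAS, Netlib, Accelerate, ...) provides them.
use std::os::raw::{c_char, c_double, c_int};

extern "C" {
    // C = alpha * op(A) * op(B) + beta * C
    // Column-major layout; every argument is passed by reference,
    // following the Fortran calling convention.
    pub fn dgemm_(
        transa: *const c_char,
        transb: *const c_char,
        m: *const c_int,
        n: *const c_int,
        k: *const c_int,
        alpha: *const c_double,
        a: *const c_double,
        lda: *const c_int,
        b: *const c_double,
        ldb: *const c_int,
        beta: *const c_double,
        c: *mut c_double,
        ldc: *const c_int,
    );
}
```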
Sorry, it was my misunderstanding.
It’s been a while. I think the main concern is that, if the APIs are drastically different, they would not fit into the current architecture, which is to have a small number of sys crates interfacing with different sources, allowing one to build on top. Everything that is specific to a particular implementation should probably have its own sys crate, as we discuss here.
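As a rough illustration of why the raw cuBLAS API does not fit the pattern above (the Rust type names here are placeholders, not the actual cuda-sys bindings): every call goes through a library handle, takes enums rather than character flags, returns a status code, and expects device pointers, so the caller also has to manage CUDA memory transfers.

```rust
// Hedged sketch paraphrasing the cuBLAS v2 C API; compare with dgemm_ above.
use std::os::raw::{c_double, c_int};

pub enum CublasContext {} // opaque handle type (placeholder name)
pub type CublasHandle = *mut CublasContext;
pub type CublasOperation = c_int; // CUBLAS_OP_N = 0, CUBLAS_OP_T = 1, ...
pub type CublasStatus = c_int;

extern "C" {
    // A handle must be created before any computational call.
    pub fn cublasCreate_v2(handle: *mut CublasHandle) -> CublasStatus;

    pub fn cublasDgemm_v2(
        handle: CublasHandle,
        transa: CublasOperation,
        transb: CublasOperation,
        m: c_int,
        n: c_int,
        k: c_int,
        alpha: *const c_double,
        a: *const c_double, // device pointer
        lda: c_int,
        b: *const c_double, // device pointer
        ldb: c_int,
        beta: *const c_double,
        c: *mut c_double, // device pointer
        ldc: c_int,
    ) -> CublasStatus;
}
```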
It might be more realistic to support NVBLAS, a library that Nvidia built on top of cuBLAS to be a drop-in replacement for BLAS. It also has heuristics for deciding whether GPU acceleration is worth the time it would take to transfer the input to GPU memory and the output back.
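If someone wanted to experiment with that route, a minimal build.rs sketch for linking against NVBLAS might look like the following. The library search path is an assumption; NVBLAS itself reads nvblas.conf (or the file named by NVBLAS_CONFIG_FILE) at runtime to pick the CPU BLAS fallback and the GPUs to use, since it only intercepts the Level-3 routines.

```rust
// build.rs - sketch of linking NVBLAS instead of a CPU BLAS.
fn main() {
    // Assumption: libnvblas lives in the CUDA toolkit's lib64 directory;
    // adjust or drop this line if it is already on the linker search path.
    println!("cargo:rustc-link-search=native=/usr/local/cuda/lib64");
    println!("cargo:rustc-link-lib=dylib=nvblas");
}
```

Because NVBLAS exports the standard Fortran BLAS symbols, the `dgemm_` binding sketched earlier would work unchanged against it.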
Can cuBLAS be supported in this library?
If so, it would be very useful.