Reduce overhead of using libraries #13
Comments
I don't know how Rust does it, but virtual/dynamic dispatch to a shared/dynamic lib in C++ does a lookup on each call. Good JIT runtimes use polymorphic inline caches to avoid this at runtime, but obviously that's not an option here. Your only options are to ensure the calls happen rarely (e.g. chunky calls / sending multiple commands at once), to change the library architecture, or possibly to statically link the lib.
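To make the "chunky calls" idea concrete in Rust terms, here is a minimal sketch; the `Backend` trait, the `Native` type, and the `dot`/`dot_batch` methods are hypothetical and not this project's actual API. The point is only that one dispatched call over a whole batch amortizes the per-call cost that many fine-grained dispatched calls would each pay.

```rust
/// Hypothetical compute-backend trait; not this project's real API.
trait Backend {
    /// One dynamically dispatched call per dot product.
    fn dot(&self, a: &[f32], b: &[f32]) -> f32;

    /// "Chunky" variant: one dispatched call covers many dot products.
    /// A real backend would override this to hand the whole batch to the
    /// underlying library in a single call.
    fn dot_batch(&self, pairs: &[(&[f32], &[f32])]) -> Vec<f32> {
        pairs.iter().map(|(a, b)| self.dot(a, b)).collect()
    }
}

struct Native;

impl Backend for Native {
    fn dot(&self, a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

fn main() {
    let backend: Box<dyn Backend> = Box::new(Native);
    let a = vec![1.0_f32; 100];
    let b = vec![2.0_f32; 100];

    // Fine-grained: 1000 separately dispatched calls.
    let fine: f32 = (0..1000).map(|_| backend.dot(&a, &b)).sum();

    // Chunky: a single dispatched call handles the whole batch.
    let pairs: Vec<(&[f32], &[f32])> = (0..1000).map(|_| (&a[..], &b[..])).collect();
    let chunky: f32 = backend.dot_batch(&pairs).iter().sum();

    assert_eq!(fine, chunky);
}
```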
@bklooste: I am also not too sure how Rust handles that, but I think LTO (link-time optimization) may already take care of it; at least I haven't seen any significant overhead from that. As for statically linking the lib, that should generally be possible in the relevant plugins (cudnn in -nn and cublas in -blas should support static linking). What I originally meant by dynamic dispatch was Rust's own dynamic dispatch, as explained in this part of the Rust book.
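For reference, a minimal sketch of the two dispatch forms that part of the Rust book describes; the `Blas` trait and `Cpu` type here are made up for illustration. A trait-object call goes through a vtable at runtime, while a generic bound is monomorphized, so the compiler (and LTO) can resolve and inline it at compile time.

```rust
trait Blas {
    fn dot(&self, a: &[f32], b: &[f32]) -> f32;
}

struct Cpu;

impl Blas for Cpu {
    fn dot(&self, a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

// Dynamic dispatch: the concrete type is erased, so every call to
// `backend.dot(..)` is an indirect call through a vtable.
fn dot_dyn(backend: &dyn Blas, a: &[f32], b: &[f32]) -> f32 {
    backend.dot(a, b)
}

// Static dispatch: monomorphized for each concrete `B`, so the call is
// direct and can be inlined by the optimizer.
fn dot_static<B: Blas>(backend: &B, a: &[f32], b: &[f32]) -> f32 {
    backend.dot(a, b)
}

fn main() {
    let (a, b) = (vec![1.0_f32; 100], vec![2.0_f32; 100]);
    let cpu = Cpu;
    assert_eq!(dot_dyn(&cpu, &a, &b), dot_static(&cpu, &a, &b));
}
```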
Looking at the +1000x case:

Dot product of two vectors of size 100 | 48,870 ns (+/- 499) | 15,226 ns (+/- 244)

The cost of dynamic dispatch is not typically huge; it's just a static indexed lookup for the right method. However it can suffer:

this.v1();
We currently still have a significant overhead when compared to directly calling a library implementation. As far as I can tell from profiling, most of that overhead is due to dynamic dispatch, which in some cases might only be removable with some bigger restructuring of the library.
Any input on where/how performance can be improved is highly appreciated! :)
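One possible direction for such a restructuring, sketched below with hypothetical types (`Backend`, `Native`, `DynLayer`, `Layer` are illustrative, not this project's real API): instead of holding the backend as a boxed trait object and paying a vtable lookup on every operation, the backend becomes a type parameter of the owning struct, so calls are resolved at compile time and can be inlined. The trade-off is that every type holding a backend then has to carry the generic parameter, which is the kind of bigger restructuring mentioned above.

```rust
/// Hypothetical backend trait; a stand-in for the real plugin interfaces.
trait Backend {
    fn dot(&self, a: &[f32], b: &[f32]) -> f32;
}

struct Native;

impl Backend for Native {
    fn dot(&self, a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

/// Before: the backend is a trait object, so every op is an indirect call.
struct DynLayer {
    backend: Box<dyn Backend>,
}

impl DynLayer {
    fn forward(&self, a: &[f32], b: &[f32]) -> f32 {
        self.backend.dot(a, b) // vtable lookup on each call
    }
}

/// After: the backend is a type parameter, so ops are monomorphized.
struct Layer<B: Backend> {
    backend: B,
}

impl<B: Backend> Layer<B> {
    fn forward(&self, a: &[f32], b: &[f32]) -> f32 {
        self.backend.dot(a, b) // direct, inlinable call
    }
}

fn main() {
    let (a, b) = (vec![1.0_f32; 100], vec![2.0_f32; 100]);

    let dynamic = DynLayer { backend: Box::new(Native) };
    let generic = Layer { backend: Native };

    assert_eq!(dynamic.forward(&a, &b), generic.forward(&a, &b));
}
```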