Benchmarks Keep You Honest (v0.8.2)
This release adds a benchmarking library, along with an additional executable, JLUNA_BENCHMARK, which compares various performance-critical features: each is first implemented purely in C, as optimally as possible*, and the same functionality implemented using jluna is then measured against that baseline.
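To give a rough idea of the shape of such a comparison, here is a minimal sketch, not the actual benchmark code: the same operation is timed once through the raw Julia C API and once through jluna. The `benchmark` helper and the operation chosen are made up for illustration, and the jluna call name shown in the comment is an assumption about the 0.8.x API; only the Julia C API calls (`jl_init`, `jl_eval_string`, `jl_atexit_hook`) are real.

```cpp
// illustration only: time the same operation via the raw Julia C API,
// then time the jluna equivalent the same way and compare.
#include <julia.h>
#include <chrono>
#include <cstddef>
#include <iostream>

// hypothetical helper: run `f` n times, return average duration per call
template<typename Function_t>
std::chrono::duration<double, std::milli> benchmark(Function_t f, size_t n = 10000)
{
    auto start = std::chrono::steady_clock::now();
    for (size_t i = 0; i < n; ++i)
        f();
    auto end = std::chrono::steady_clock::now();
    return (end - start) / n;
}

int main()
{
    jl_init();

    // baseline: evaluate a string purely through the Julia C API
    auto c_api = benchmark([](){
        jl_eval_string("sqrt(2)");
    });

    // jluna version would be timed identically, e.g. (call name is an
    // assumption, see the real benchmark code for the exact equivalent):
    // auto via_jluna = benchmark([](){
    //     jluna::State::safe_script("sqrt(2)");
    // });

    std::cout << "C API: " << c_api.count() << "ms per call\n";

    jl_atexit_hook(0);
    return 0;
}
```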
Code for the benchmark is available here. It can of course be run by any end-user.
The console output of the benchmark executable includes human-readable results. A copy of the latest benchmarking results (run on a not particularly powerful laptop), regenerated whenever a new release is drafted, will be available here. A legacy run, which will not be updated and serves as a snapshot of the current state of jluna's performance, will be attached to this post.
I will be drafting a blog post walking through the results one by one. It will be available on my website; as of March 13th, it has not yet been completed.
Thank you all,
C.
* which actually means "to the best of my abilities". If someone finds a faster way to implement any given benchmark, please either message me or open a PR; I'd be very interested in that.