Modernization, CI Enhancements & Benchmark Improvements #6
Conversation
Benchmarks on my computer, which really mean nothing, since both the tooling and the machine changed. Most of the criterion progress output was truncated; the timings that survived:
- aldous_broder/generate_10_x_10: time [32.052 µs 32.119 µs 32.192 µs], 8 outliers among 100 measurements (2 low severe, 2 high mild, 4 high severe)
- eller//generate_10_x_10: time [16.266 µs 16.286 µs 16.308 µs]
- prim/generate_10_x_10: time [7.6122 µs 7.7627 µs 8.0397 µs]
- prim/generate_100_x_100: time [2.0352 ms 2.0444 ms 2.0538 ms]
- image/format_10_x_10: time [7.5186 ms 7.5650 ms 7.6212 ms]
The remaining generators (binary_tree, growing_tree_method_*, hunt_and_kill, kruskal, recursive_backtracking, recursive_division, sidewinder) and formatters (game_map, ascii_broad, ascii_narrow, image/format_50_x_50) only show truncated "Collecting 100 samples..." progress lines. Gnuplot not found, using plotters backend.
hey @naomijub. really appreciate you acknowledging my work. thank you! I've noticed some inconsistencies and would like to clarify a few things before merging; I'll leave comments in the review. In the meantime, could you update the commit messages to follow the Conventional Commits spec? Also, please split the benchmark updates out of this PR into a separate one; that should make the review process much smoother.
Benchmarks don't compile without nightly, and I'm not open to adding nightly for this. But I can do the MR blindly if you prefer. Also, may I suggest adding a CONTRIBUTING.md that documents the Conventional Commits spec as the commit convention? Similar to other libraries, e.g. https://github.com/nats-io/nats.rs/blob/main/CONTRIBUTING.md
@unrenamed commit messages addressed |
I might be missing something, but how did you get the bench results before, then?
sounds great 👍 Feel free to add one in this PR.
For consistency, may I ask you to apply these changes as well? You may also want to reflect this in
Example: ci: increase the CI usage
- add ...
- add ...
- make...
I didn't, that is why I mentioned they were not really useful haha
you also might want to approve the CI workflow run for third parties. See GitHub's docs on approving workflow runs.
yeah, it's a good idea. thanks for the link. as for your fixes, everything looks good. Validating benches and workflow runs will be the final steps. One thing though: please review your commit messages again — titles look better now, but the bodies still violate the convention. p.s. and fix typos like this one:
I've run `cargo bench` on my machine several times and noticed some warnings I'm concerned about. I can see them in the logs you've shared as well:

Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 7.6s, enable flat sampling, or reduce sample count to 50.

tbh I'm not OK with leaving them, since those benches seem to be useless now and give no valuable info for users. As far as I'm concerned, criterion.rs provides some advanced configuration which might help fix the benches that produce warnings. WDYT?
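For reference, a minimal sketch of the kind of per-group Criterion configuration being referred to; the group name and the `generate_maze` placeholder are illustrative, not the crate's actual bench code:

```rust
use std::hint::black_box;
use std::time::Duration;

use criterion::{criterion_group, criterion_main, Criterion};

// Placeholder generator so the sketch is self-contained; the real benches
// would call the crate's own maze builder here.
fn generate_maze(width: usize, height: usize) -> usize {
    width * height
}

fn bench_generation(c: &mut Criterion) {
    let mut group = c.benchmark_group("recursive_division");
    // Fewer samples plus a longer measurement window lets the slow
    // 100 x 100 runs finish without the
    // "Unable to complete 100 samples in 5.0s" warning.
    group.sample_size(50);
    group.measurement_time(Duration::from_secs(10));
    group.bench_function("generate_100_x_100", |b| {
        b.iter(|| generate_maze(black_box(100), black_box(100)))
    });
    group.finish();
}

criterion_group!(benches, bench_generation);
criterion_main!(benches);
```

The same knobs can also be applied to every bench at once via the `config = Criterion::default()...` form of `criterion_group!`.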
Sorry, I'm on vacation for the next few days and won't have access to my computer. I'll take a look at that when I'm back. Those warnings mostly mean that the bench tool was not able to run all the predefined benchmark executions in time: it usually executes 100 in x seconds, and the maze generation takes longer than that. This seems completely reasonable considering the nature of a maze generation algorithm.
I fully agree with you. Moreover, the algorithms' performance can vary due to random distribution. Right now, the warnings indicate that some benchmarks are omitted. We should adjust the longer-running ones by reducing the sample count or increasing the execution time so they complete and produce results without warnings. P.S. Enjoy your vacation!
@naomijub I'd be happy to merge this PR as it is so I can proceed with my other PRs and make some minor adjustments to the codebase. However, since some checks are currently failing—particularly due to a typo—I’m unable to do so at the moment. Let me know if you're still available to fix them. Also, please use lowercase for this commit message:
I will be back Sunday, then I'm happy to make the adjustments, as this MR was done with another user account that I don't have on this machine.
Hey @unrenamed I managed to fix the commits without the credentials for that computer. Hope everything works now |
LGTM 👍
In my bevy branch I have an indexing and iterating solution. Do you want me to upstream it?
These are changes from my own usage of Knossos; I will upstream them in steps so we can discuss the API changes I suggest from my usage. I want to say congrats on the work, your maze library seems to be the best thought out among the ones I analyzed.
I made quite a few changes that don't break the API:
- `cargo update`.
- `rng.gen::<bool>()` was replaced by `rng.random_bool(0.5)`. Named `0.5` as `BOOL_TRUE_PROBABILITY`, which is the probability of getting `true` (50%). A sketch of this change is shown below.
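A minimal sketch of what that swap looks like with the rand 0.9 API; the `flip_coin` helper and `main` are illustrative, not the crate's actual call sites:

```rust
use rand::Rng;

/// Probability that the flip yields `true` (50%).
const BOOL_TRUE_PROBABILITY: f64 = 0.5;

// Illustrative helper showing the old vs. new call.
fn flip_coin<R: Rng>(rng: &mut R) -> bool {
    // rand 0.8:  rng.gen::<bool>()
    // rand 0.9:  rng.random_bool(p), where p is the probability of `true`
    rng.random_bool(BOOL_TRUE_PROBABILITY)
}

fn main() {
    // rand 0.9 renamed `thread_rng()` to `rng()`.
    let mut rng = rand::rng();
    println!("coin flip: {}", flip_coin(&mut rng));
}
```

Naming the constant makes the 50% intent explicit instead of leaving a bare `0.5` literal at the call site.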