[RFC] CI for running Benchmarks #324
I think this is a good improvement. I would suggest that the action run once on the …
I think that makes sense! We can have an initial run when the PR is opened, and then ad-hoc runs through comments. The reporting comment can be sticky, so there is not too much noise, and as you suggest it can also mention how to re-run and update. I see that the bench in the makefile outputs mean, median, and stdev; should we report these raw values and the diff from master, or just the diff? And should we also report the values for cmark-gfm, for comparison?
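For the sticky-comment part, here is a minimal sketch of how a CI step might upsert a single report comment via the GitHub REST API. This is an illustration only; `MARKER`, `REPO`, and `PR` are hypothetical placeholders, not names from this project:

```python
import os
import requests

MARKER = "<!-- benchmark-report -->"  # hidden tag so we can find our own comment
REPO = "owner/repo"                   # placeholder; CI would read this from its env
PR = 1                                # placeholder PR number

def upsert_report(body: str) -> None:
    headers = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}
    issues = f"https://api.github.com/repos/{REPO}/issues"
    tagged = f"{MARKER}\n{body}"
    # Edit the existing report comment in place if there is one...
    for c in requests.get(f"{issues}/{PR}/comments", headers=headers).json():
        if MARKER in c["body"]:
            requests.patch(
                f"https://api.github.com/repos/{REPO}/issues/comments/{c['id']}",
                json={"body": tagged}, headers=headers)
            return
    # ...otherwise post a fresh one.
    requests.post(f"{issues}/{PR}/comments", json={"body": tagged}, headers=headers)
```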
I think raw + diff makes sense here. If it's helpful, here's an example of running the complete program (though it sounds like you know where to look already :)).
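As a sketch of what a raw + diff report could look like, here is one way to render the table (the stats dicts and timings below are invented purely for illustration):

```python
def report(master: dict, pr: dict) -> str:
    """Render mean/median/stdev raw values plus percent change vs. master."""
    rows = ["| metric | master | PR | diff |",
            "| ------ | ------ | -- | ---- |"]
    for key in ("mean", "median", "stdev"):
        m, p = master[key], pr[key]
        rows.append(f"| {key} | {m:.3f}s | {p:.3f}s | {(p - m) / m * 100:+.1f}% |")
    return "\n".join(rows)

# Example with made-up timings (seconds of user time):
print(report({"mean": 0.52, "median": 0.51, "stdev": 0.02},
             {"mean": 0.49, "median": 0.49, "stdev": 0.02}))
```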
I think cmark-gfm plus the two libraries mentioned in the README (…)
I think such a change is a really good idea! Very much welcome. My main concern would be measurement noise from the execution environment, but the benchmark script uses user-space time exclusively (not real/wall-clock time), which should exclude variance due to vCPU scheduling etc. in VMs like those used by Actions, so it's likely fine.
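To illustrate the user-space-time point: on Unix, a harness can read the child process's user CPU time directly, so wall-clock jitter from the VM scheduler never enters the measurement. A minimal sketch, assuming a Unix runner (the function name and paths are hypothetical, and the `resource` module is Unix-only):

```python
import resource
import subprocess

def user_time(cmd, input_path):
    """Run `cmd` once with `input_path` on stdin and return the user-space
    CPU seconds it consumed (delta of RUSAGE_CHILDREN, so time stolen by
    the hypervisor or other tenants is not counted)."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime
    with open(input_path, "rb") as f:
        subprocess.run(cmd, stdin=f, stdout=subprocess.DEVNULL, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime - before
```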
This is very neat!
Great! I'll try to get a PR up this weekend. Also, what are your thoughts on owning the benchmarking script instead of using cmark-gfm's? We would still use the same approach of running over the pro-git md files, but instead of just … This can be split into a separate PR, so we can do the CI update first and then think about changing the bench script, but I wanted to get comments on this as well.
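For the "own the script" idea, here is a self-contained sketch of what such a harness could look like: run the parser binary over the sample file N times, time only user-space CPU, and emit mean/median/stdev. The binary and sample paths are placeholders, not this project's actual layout:

```python
import resource
import statistics
import subprocess

def user_time(cmd, sample):
    # User CPU seconds for one run (see the earlier sketch for details).
    before = resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime
    with open(sample, "rb") as f:
        subprocess.run(cmd, stdin=f, stdout=subprocess.DEVNULL, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime - before

def bench(cmd, sample, runs=20):
    times = [user_time(cmd, sample) for _ in range(runs)]
    return {"mean": statistics.mean(times),
            "median": statistics.median(times),
            "stdev": statistics.stdev(times)}

if __name__ == "__main__":
    # Placeholder paths: the built binary and the concatenated pro-git sample.
    print(bench(["./target/release/parser-bin"], "bench/sample.md"))
```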
I think this is a great idea; I've had something similar in the back of my head for a while.
This is completely fine, and I don't think you need to split up the PRs unless you really want to; there's nothing very special about their method, and if we show comparisons against …
I proposed splitting in case you wanted to have them evaluated separately (as they are two changes: adding CI and changing the bench script). Anyway, I'll do both in the same PR then 👍 To be clear, I'm proposing the following changes for the benches:
…
Sounds perfect. 👍 Thank you! 🤍
Given that there are quite a few enhancements requested (#291 and those linked in its description), there will be code changes that potentially alter the processing path or data structures to implement them. As this is a parsing library with speed as a concern, there should be a way to track changes in processing time across commits.
Currently there are benchmarks in the makefile, so we should probably allow running them on GitHub itself through CI. One way would be to run on each push; another would be to trigger a run through a comment.
Would a PR on this be welcome? Any suggestions on how to do the benchmarks apart from the makefile, or on the overall approach?
Thanks!