Computing statistics on generated test values #30
To be honest, I've never actually used such reporting functionality, so I do not understand this use case fully. Your code seems fine to me:

```go
func TestStackRapid(t *testing.T) {
	rapid.Check(t, func(t *rapid.T) {
		// ...
		stats.Event("some event")
		// ...
	})
	stats.PrintStats(t)
}
```

As for implementing reporting inside rapid itself, right now I am not a fan: I think this is quite a narrow use case. We should not show such things by default (I believe tests should be silent by default), and when it is hidden behind an option, not many people will benefit from it.
Thanks, doing the printing after calling Check is indeed a good idea. The use case for such reporting becomes important when you define your own generators. How do you ensure that you generate good test data? You might measure test coverage to find out whether the relevant code paths are exercised. But what do you do if they are not? In that case you need to understand what your generator is producing — not just a few samples, but the data actually generated in your tests. In such situations, a reporting tool comes in quite handy; it has been a standard feature of QuickCheck since its inception in Haskell, as well as of its commercial Erlang version. Other implementations provide that functionality too, e.g. Hypothesis (Python), PropEr (Erlang, Elixir), ScalaCheck (Scala). So, reporting is a kind of debugging tool for test developers.
Thanks for the explanation! Do you have in mind how it should look in rapid? Something close to Hypothesis?
Since we are close to Hypothesis and do not have the FP restrictions (and abilities), I would suggest modeling it generally after Hypothesis. In my simple implementation, I added a … I can provide a PR for this.
Sounds interesting, let's see the PR (can't promise fast review right now, unfortunately).
In Hypothesis and other QuickCheck-like implementations, it is possible to calculate statistics, usually to validate that test data generation works as expected or whether it is skewed somehow, as described here: https://hypothesis.readthedocs.io/en/latest/details.html#test-statistics

While it is easy to implement something like the `event()` function of Hypothesis for rapid, generating reports is not. There are no means for decorating a property (that I am aware of), and calling a `PrintStats()` function at the end of the property means it will be run every time (i.e. 100 times for a single property). What seems to work is the snippet quoted in the reply above: calling `PrintStats(t)` after `rapid.Check` returns. Is that a hack or an intended way of decorating a property? And: are you interested in an implementation of such statistics?