
Test helpers to add to project #93

Closed

mehaase opened this issue Mar 16, 2020 · 2 comments

Comments


mehaase commented Mar 16, 2020

There are a few helpers I've written I copy into each new Trio project. Would any of these be good batteries to include in this project?

  • An AsyncMock class for mocking coroutines. At least once on every new project, I forget that the built-in Mock cannot be awaited. Here's an example implementation (a rough sketch also follows at the end of this comment).
  • A fail_after() decorator that limits the time (relative to Trio's clock) that each test can run. With concurrency, I find that it's easy to write code (and/or tests) that deadlocks, and if you ctrl+c a pytest run you lose a lot of valuable info. Or if you run a deadlocked test on a CI platform like Travis, you'll time out the entire build. Here's an example implementation. I realize that this would step on the toes of Add test timeout support #53.
  • Or, as an alternative to fail_after(), I've also written min/max execution time decorators. See examples here. The goal for these isn't necessarily preventing runaway tests, but rather making assertions about timing. For example, if you've written something that's supposed to load-balance connections, you can use these assertions to ensure that it doesn't dispatch them too quickly or too slowly.

I'm happy to contribute ideas or source code. The 3 decorators mentioned above might also work better as context managers.
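
For context, a minimal sketch of the kind of AsyncMock helper described in the first bullet might look like the following (illustrative only, not the linked implementation; the class name mirrors the stdlib's, but the behavior here is deliberately minimal):

```python
from unittest.mock import Mock


class AsyncMock(Mock):
    """A Mock whose calls must be awaited (minimal illustrative sketch)."""

    async def __call__(self, *args, **kwargs):
        # Delegate to the normal Mock machinery when awaited: the call is
        # recorded and the configured return_value is returned.
        return super().__call__(*args, **kwargs)
```

A test could then do something like fetch = AsyncMock(return_value=42), assert await fetch() == 42, and fetch.assert_called_once() (hypothetical usage).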

njsmith (Member) commented Mar 19, 2020

An AsyncMock class for mocking coroutines. At least once on every new project, I forget that the built-in Mock cannot be awaited. Here's an example implementation.

How does this compare to the AsyncMock added to the stdlib in 3.8? Is it essentially just a backport for earlier versions?

A fail_after() decorator [...] might also work better as context managers.

So that would just be trio.fail_after, then...?

I do think #53 would be a good and fairly straightforward thing to implement, and it would be strictly better for your use cases, right?

min/max execution time

Huh, interesting. I agree that making these context managers might make more sense, similar to pytest's with pytest.raises(...), and they seem like inoffensive standalone utility functions. I am having some trouble imagining use cases for them, though, so I'm not sure whether that means they're really niche or my imagination is just bad :-).
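
For illustration, context-manager versions of the min/max timing assertions could be sketched roughly like this (the names assert_min_elapsed and assert_max_elapsed are hypothetical, and the timings are read from Trio's clock, so the block is assumed to run inside a Trio test):

```python
from contextlib import contextmanager

import trio


@contextmanager
def assert_min_elapsed(seconds):
    """Assert that the wrapped block takes at least `seconds` on Trio's clock."""
    start = trio.current_time()
    yield
    elapsed = trio.current_time() - start
    assert elapsed >= seconds, f"took {elapsed:.3f}s, expected >= {seconds}s"


@contextmanager
def assert_max_elapsed(seconds):
    """Assert that the wrapped block finishes within `seconds` on Trio's clock."""
    start = trio.current_time()
    yield
    elapsed = trio.current_time() - start
    assert elapsed <= seconds, f"took {elapsed:.3f}s, expected <= {seconds}s"
```

A load-balancing test might then write with assert_max_elapsed(0.5): await dispatch_connection() (hypothetical call), which reads much like pytest.raises.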

mehaase (Author) commented Mar 25, 2020

How does this compare to the AsyncMock added to the stdlib in 3.8? Is it essentially just a backport for earlier versions?

Whoops, I didn't realize that 3.8 has an AsyncMock. This isn't really a backport because I didn't put in the effort to completely replicate the API.

So that would just be trio.fail_after, then...?

I do think #53 would be a good and fairly straightforward thing to implement, and it would be strictly better for your use cases, right?

It's like trio.fail_after(), except that it fails the test instead of erroring it. But #53 would be a great alternative.
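
As an illustration of that distinction (a sketch, not the implementation linked above): the decorator can cancel the test body with trio.move_on_after() and then call pytest.fail(), so a timeout shows up as an explicit failure message instead of a raw TooSlowError traceback.

```python
import functools

import pytest
import trio


def fail_after(seconds):
    """Fail (not error) an async test that exceeds `seconds` on Trio's clock."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            # Cancel the test body if it runs too long, then report a failure.
            with trio.move_on_after(seconds) as cancel_scope:
                await fn(*args, **kwargs)
            if cancel_scope.cancelled_caught:
                pytest.fail(f"Test did not finish within {seconds} seconds")
        return wrapper
    return decorator
```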

mehaase closed this as completed Mar 25, 2020