While the mergebot doesn't seem likely to require more manpower, and the end of modern civilisation might make the entire thing moot anyway, if I get hit by a truck before that happens, taking over and properly running it would be non-trivial, even though it is load-bearing for RD's ability to work. A number of enhancements are needed for things to be easy to pick up:
- Document prerequisites (mostly the github accounts stuff needed to run the suite against github actual, and the shared configuration when running against dummy-central).
- Document development dependencies.
- Embed the running of mitmproxy and dummy-central in pytest somehow (see the fixture sketch below the list).
  - tracing how?
  - if xdist, per-worker or per-suite?
- Move dummy-central to the odoo org as a public repo.
  - rationalise the account setup
  - add CI for fmt and clippy (see the workflow sketch below the list)
  - add CI for tests
  - clean up a bit (though it'll likely be a WIP forever, even aside from github being something of a moving target)
- maybe add a way to validate the tests against github actual without that being absolute hell, though that seems complicated due to the requirement of a scratch org with multiple accounts; things should probably be documented at least
- Probably document the application logic, at least in the rough (the test suite should provide a lot of the non-UX details, both intrinsically and via the git log, but laying out the concepts more formally would not be remiss).
- Specifically document the challenges of, and issues with, running the test suite against github actual.
Maybe add CI to the mergebot? Not trivial though, as it requires postgres (TODO: look into ephemeral PG? see the sketch below) and the runners are 4/4, which would make the local runtime quite long[^1]. Running against github actual is not even a consideration[^2].
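A minimal sketch of what the pytest embedding could look like, as session-scoped fixtures spawning both processes as subprocesses; the binary names, ports, and flags here are assumptions, not the suite's actual invocation. Note that under pytest-xdist, session-scoped fixtures are instantiated once per worker, so making these per-suite instead would need extra coordination (e.g. a file lock), which is exactly the per-worker vs per-suite question above.

```python
# conftest.py sketch; binary names, ports, and flags are assumptions.
import socket
import subprocess
import time

import pytest


def wait_for_port(port: int, host: str = "127.0.0.1", timeout: float = 30.0) -> None:
    """Poll until something accepts TCP connections on host:port."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return
        except OSError:
            time.sleep(0.1)
    raise RuntimeError(f"nothing listening on {host}:{port} after {timeout}s")


@pytest.fixture(scope="session")
def dummy_central():
    # hypothetical binary name and flag
    proc = subprocess.Popen(["dummy-central", "--port", "8080"])
    try:
        wait_for_port(8080)
        yield "http://127.0.0.1:8080"
    finally:
        proc.terminate()
        proc.wait(timeout=10)


@pytest.fixture(scope="session")
def mitmproxy():
    # mitmdump is mitmproxy's headless CLI; the real flags depend on
    # how the suite actually routes traffic through it
    proc = subprocess.Popen(["mitmdump", "--listen-port", "8443"])
    try:
        wait_for_port(8443)
        yield "http://127.0.0.1:8443"
    finally:
        proc.terminate()
        proc.wait(timeout=10)
```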
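For the fmt/clippy/tests items, a workflow sketch along these lines, assuming dummy-central is a regular Cargo project (which the fmt/clippy items suggest); the layout is hypothetical:

```yaml
# .github/workflows/ci.yml sketch for dummy-central
name: CI
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --all --check
      - run: cargo clippy --all-targets -- -D warnings

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo test --all-targets
```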
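On the ephemeral-PG TODO, one approach is a throwaway cluster in a temp directory driven from a pytest fixture; this sketch assumes `initdb` and `pg_ctl` are on `$PATH`. On GitHub Actions a postgres service container would be the more conventional route.

```python
# Sketch of an "ephemeral PG" session fixture (hypothetical name).
import subprocess

import pytest


@pytest.fixture(scope="session")
def pg_cluster(tmp_path_factory):
    datadir = tmp_path_factory.mktemp("pgdata")
    sockdir = tmp_path_factory.mktemp("pgsock")
    subprocess.run(
        ["initdb", "-D", str(datadir), "--auth=trust", "--no-sync"],
        check=True, capture_output=True,
    )
    # unix socket only: no TCP port to collide with parallel runs
    subprocess.run(
        ["pg_ctl", "start", "-D", str(datadir), "-w", "-o",
         f"-c listen_addresses='' -c unix_socket_directories='{sockdir}'"],
        check=True,
    )
    try:
        yield {"host": str(sockdir)}  # psycopg accepts a socket dir as host
    finally:
        subprocess.run(
            ["pg_ctl", "stop", "-D", str(datadir), "-m", "fast"], check=True
        )
```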
[^1]: locally the tests take about 20mn at -n12 on a 6/12 laptop; they're around 500~600% CPU, but some of the CPU can't be accounted for by `time`-ing pytest (postgres, dummy-central, mitmproxy), and running too fast can trigger timing issues anyway (even at the current speed it does, and the tests are not perfectly reliable)

[^2]: github runners are limited to 6h runtime[^3], which would very much require running at full tilt at the very least, which github would not allow for between the rate limits (primary and secondary both) and the necessary delays waiting for webhooks & stuff; plus exposing the internals of the job VM for github to send webhooks to seems pretty crazy

[^3]: self-hosted runners are limited to 5 days, which is a lot more reasonable, but that would then require instrumenting jobs in order to better handle secondary rate limits (that is, understand that they happened, then back off and retry; see the sketch below)
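A minimal sketch of that backoff-and-retry instrumentation, assuming a `requests`-based client (a hypothetical helper, not the bot's actual HTTP layer): GitHub signals secondary rate limits with a 403 or 429, often carrying a `Retry-After` header.

```python
import time

import requests  # assumed client


def github_request(session: requests.Session, method: str, url: str,
                   max_retries: int = 5, **kwargs) -> requests.Response:
    """Retry when GitHub reports a secondary rate limit."""
    for attempt in range(max_retries):
        resp = session.request(method, url, **kwargs)
        if resp.status_code not in (403, 429):
            return resp
        secondary = ("secondary rate limit" in resp.text.lower()
                     or "retry-after" in resp.headers)
        if not secondary:
            return resp  # some other 403/429, the caller's problem
        # honour Retry-After when present, else exponential backoff from 60s
        delay = float(resp.headers.get("retry-after", 60 * 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError(f"still secondary-rate-limited after {max_retries} tries: {url}")
```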