feat(test): implement test command #724
Conversation
The tests are failing because gherkin-parser doesn't exist in reana-commons yet.
Looks good! I have left just a few small comments.
This PR also needs to be rebased ;)
Force-pushed from b231b90 to d052838: Various changes addressing review comments.
Various changes addressing review comments.
Also avoided mocking api client in favour of the functions directly.
Looks good to me! Nice job! 🚀
Force-pushed from ba5d5c3 to 7232d73.
LGTM! CI should pass after installing a new version of reana-commons.
tests/test_cli_test.py (outdated):

```python
        "test_analysis.feature",
    ],
)
assert "Using test file test_analysis.feature" in result.output
```
I have tested a few combinations of client and server and test files, and things seem to work nicely 👍
However I have some cosmetic comments to the output formatting.
Currently, when a test fails, the output looks like:
Using test file tests/serial/log-messages.feature
[ERROR] Scenario failed! Failed testcase: the job logs for the "gendata" step should contain "datasets
--------
ZRooDataSet::modelData(x)"
[ERROR] Error log: The logs do not contain datasets
--------
ZRooDataSet::modelData(x)!
Summary of tests/serial/log-messages.feature:
Tested The workflow start has produced the expected messages: passed
Tested The generation step has produced the expected messages: failed
Tested The fitting step has produced the expected messages: passed
Using test file tests/serial/run-duration.feature
Summary of tests/serial/run-duration.feature:
Tested The workflow terminates in a reasonable amount of time: passed
Tested The data generation step terminates in a reasonable amount of time: passed
Tested The fitting step terminates in a reasonable amount of time: passed
Using test file tests/serial/workspace-files.feature
Summary of tests/serial/workspace-files.feature:
Tested The workspace contains the expected input files: passed
Tested The generation step creates the appropriate data file: passed
Tested The workflow generates the correct final plot: passed
Tested The total workspace size remains within reasonable limits: passed
Some observations on the above:
- Note that for multi-line values, it is not clear where the expected text starts and ends. In the first ERROR message, the text is well delimited by quotes; in the second, it is not. It would be nice to make the second error message behave like the first one, for consistency.
- BTW the second ERROR message basically just repeats what the first ERROR message already reported; do we need it?
- Regarding the verbiage, we could perhaps make some changes: "Using test file" may not make clear that testing is taking place (so rather "Testing..." than "Using..."), and "Tested The workspace..." and "Scenario failed" may not make clear that they both refer to scenarios (so rather "Scenario..." than "Tested..."). See proposal below.
- Also, the output formatting could emulate what reana-client validate outputs with respect to headings and errors/failures, so that researchers would "feel at home" with this new command. See proposal below.
What about emitting the following output? (Errors could be displayed either all together before the summary, or inline within their respective scenarios, and users could just grep for '->'.)
[ERROR] Scenario "The generation step has produced the expected messages" failed!
Failed testcase: the job logs for the "gendata" step should contain "datasets
--------
ZRooDataSet::modelData(x)"
==> Testing file "tests/serial/log-messages.feature"
-> SUCCESS: Scenario "The workflow start has produced the expected messages"
-> ERROR: Scenario "The generation step has produced the expected messages"
-> SUCCESS: Scenario "The fitting step has produced the expected messages"
==> Testing file "tests/serial/workspace-files.feature"
-> SUCCESS: Scenario "The workspace contains the expected input files"
-> SUCCESS: Scenario "The generation step creates the appropriate data file"
-> SUCCESS: Scenario "The workflow generates the correct final plot"
-> SUCCESS: Scenario "The total workspace size remains within reasonable limits"
6 passed, 1 failed in 30s
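For illustration, here is a minimal sketch of how such a report could be rendered, with all errors printed before the per-file summaries. `ScenarioResult` and `render_report` are hypothetical names for this sketch, not the actual reana-client implementation:

```python
# Sketch only: render scenario results in the proposed "==> / ->" style.
# ScenarioResult and render_report are hypothetical names.
from dataclasses import dataclass
from typing import List


@dataclass
class ScenarioResult:
    feature_file: str
    name: str
    passed: bool
    failed_testcase: str = ""


def render_report(results: List[ScenarioResult], duration_s: int) -> str:
    lines = []
    # Errors first, so they are easy to spot at the top of the output.
    for r in results:
        if not r.passed:
            lines.append(f'[ERROR] Scenario "{r.name}" failed!')
            lines.append(f"Failed testcase: {r.failed_testcase}")
    # Then one "==> Testing file" section per feature file,
    # with one greppable "->" line per scenario.
    current_file = None
    for r in results:
        if r.feature_file != current_file:
            current_file = r.feature_file
            lines.append(f'==> Testing file "{current_file}"')
        status = "SUCCESS" if r.passed else "ERROR"
        lines.append(f'-> {status}: Scenario "{r.name}"')
    passed = sum(r.passed for r in results)
    failed = len(results) - passed
    lines.append(f"{passed} passed, {failed} failed in {duration_s}s")
    return "\n".join(lines)
```

A pytest-style summary line at the end ("N passed, M failed in Ts") also gives researchers an at-a-glance result without reading the whole report.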
Regarding the errors, I agree that the second error message can be redundant, especially in tests about the presence of files or log messages: the first message is the step that causes the error, and the second is the reasoning, but in these cases the two coincide.
Below is a case where I could see some utility: the first message tells you which line in your Gherkin file was the problem, and the second gives the precise reason. Still, I'm not sure it's worth keeping both (maybe only the reasoning is needed?).
[ERROR] Scenario failed! Failed testcase: the workflow run duration should be less than 1 minutes
[ERROR] Error log: The workflow took more than 1 minutes to complete! Run duration: 4.133333333333334 minutes
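If we do collapse the two messages into one, the expected text should always be wrapped in quotes so that multi-line values are clearly delimited. A sketch of what that could look like (the helper name is hypothetical, not the actual reana-client code):

```python
# Sketch only: emit a single [ERROR] message per failed scenario,
# always quoting the expected text so multi-line values are clearly
# delimited. format_scenario_failure is a hypothetical helper.
def format_scenario_failure(scenario: str, testcase: str, expected: str) -> str:
    quoted = f'"{expected}"'  # delimits multi-line expected values
    return (
        f'[ERROR] Scenario "{scenario}" failed!\n'
        f"Failed testcase: {testcase} {quoted}"
    )
```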
Force-pushed from 1f1e76f to 5abb241.
Force-pushed from 5abb241 to 79d0483.
Closes #723