Add pending state for results #3485

Draft · wants to merge 1 commit into `main`
tmt/result.py (8 additions, 0 deletions)

```diff
@@ -29,6 +29,7 @@ class ResultOutcome(enum.Enum):
     WARN = 'warn'
     ERROR = 'error'
     SKIP = 'skip'
+    PENDING = 'pending'
 
     @classmethod
     def from_spec(cls, spec: str) -> 'ResultOutcome':
@@ -54,6 +55,7 @@ def reduce(outcomes: list['ResultOutcome']) -> 'ResultOutcome':
             ResultOutcome.PASS,
             ResultOutcome.INFO,
             ResultOutcome.SKIP,
+            ResultOutcome.PENDING,
         )
 
         for outcome in outcomes_by_severity:
@@ -114,6 +116,8 @@ def normalize(
     ResultOutcome.ERROR: 'magenta',
     # TODO (happz) make sure the color is visible for all terminals
     ResultOutcome.SKIP: 'bright_black',
+    # TODO (mvadkert) make sure the color is visible for all terminals
+    ResultOutcome.PENDING: 'dark_white',
 }
@@ -500,6 +504,10 @@ def summary(results: list['Result']) -> str:
     if stats.get(ResultOutcome.ERROR):
         count, comment = fmf.utils.listed(stats[ResultOutcome.ERROR], 'error').split()
         comments.append(count + ' ' + click.style(comment, fg='magenta'))
+    if stats.get(ResultOutcome.PENDING):
+        count, comment = fmf.utils.listed(
+            stats[ResultOutcome.PENDING], 'pending', plural='pending').split()
+        comments.append(count + ' ' + click.style(comment, fg='bright_black'))
     # FIXME: cast() - https://github.com/teemtee/fmf/issues/185
     return cast(str, fmf.utils.listed(comments or ['no results found']))
```
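For context, a rough sketch of how the severity reduction treats the new outcome. The head of the severity tuple is assumed (only its tail is visible in the hunk above), and this stand-alone `reduce()` mirrors, rather than reproduces, the real method:

```python
import enum

class ResultOutcome(enum.Enum):
    PASS = 'pass'
    FAIL = 'fail'
    INFO = 'info'
    WARN = 'warn'
    ERROR = 'error'
    SKIP = 'skip'
    PENDING = 'pending'

# Assumed ordering, most severe first. PENDING comes last, so a plan
# holding any real outcome never reduces to an overall "pending".
OUTCOMES_BY_SEVERITY = (
    ResultOutcome.ERROR,
    ResultOutcome.FAIL,
    ResultOutcome.WARN,
    ResultOutcome.PASS,
    ResultOutcome.INFO,
    ResultOutcome.SKIP,
    ResultOutcome.PENDING,
)

def reduce(outcomes: list[ResultOutcome]) -> ResultOutcome:
    # Return the most severe outcome present in the given list.
    for outcome in OUTCOMES_BY_SEVERITY:
        if outcome in outcomes:
            return outcome
    raise ValueError('no outcomes to reduce')

assert reduce([ResultOutcome.PASS, ResultOutcome.PENDING]) is ResultOutcome.PASS
assert reduce([ResultOutcome.PENDING]) is ResultOutcome.PENDING
```

Only when every test is still waiting to run would the plan reduce to an overall pending state.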
tmt/schemas/common.yaml (1 addition, 0 deletions)

```diff
@@ -469,6 +469,7 @@ definitions:
   result_outcome:
     type: string
     enum:
+      - pending
       - pass
       - fail
       - info
```
tmt/steps/discover/__init__.py (11 additions, 0 deletions)

```diff
@@ -1,5 +1,6 @@
 import dataclasses
 from collections.abc import Iterator
+from itertools import zip_longest
 from typing import TYPE_CHECKING, Any, Optional, TypeVar, cast
 
 import click
@@ -17,6 +18,7 @@
 import tmt.utils
 from tmt.options import option
 from tmt.plugins import PluginRegistry
+from tmt.result import Result, ResultOutcome
 from tmt.steps import Action
 from tmt.utils import GeneralError, Path, field, key_to_option
@@ -415,6 +417,15 @@ def go(self, force: bool = False) -> None:
                     test.serial_number == result.serial_number):
                 self._failed_tests[test_phase].append(test)
 
+        # Initialize results with pending outcome if no results for it available
+        results = self.plan.execute.results()
+        for index, (test, result) in enumerate(zip_longest(self.tests(), results)):
+            if result is None:
+                results.insert(index, Result(name=test.name, result=ResultOutcome.PENDING))
+
+        # save the results
+        self.plan.execute._save_results(results)
+
         # Give a summary, update status and save
         self.summary()
         self.status('done')
```
**Collaborator** commented on the `zip_longest` loop:

> I believe it won't be as easy as `zip()`. Results may be listed in an arbitrary order: yes, results are added once their tests finish, but I wouldn't rely on it, and there is certainly no code enforcing the order.
>
> Check `Execute.results_for_tests()`, which lists result and test pairs. You should be interested in creating new results for pairs where the test is set and the result is `None`, that's a test that has not run yet.
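A minimal sketch of that suggestion, assuming `results_for_tests()` yields `(result, test)` pairs with `result` set to `None` for tests that have not produced anything yet (the exact signature is not shown in this diff):

```python
# Sketch only, not the actual patch: create one PENDING placeholder for
# every discovered test without a result, instead of relying on the
# ordering of plan.execute.results().
results = self.plan.execute.results()

for result, test in self.plan.execute.results_for_tests(self.tests()):
    if test is not None and result is None:
        # This test has not run yet, record it as pending.
        results.append(Result(name=test.name, result=ResultOutcome.PENDING))

self.plan.execute._save_results(results)
```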

tmt/steps/execute/__init__.py (9 additions, 3 deletions)

```diff
@@ -1129,9 +1129,11 @@ def go(self, force: bool = False) -> None:
         """ Execute tests """
         super().go(force=force)
 
-        # Clean up possible old results
+        # Clean up possible old results, by switching them to pending state
         if force:
-            self._results.clear()
+            for index, result in enumerate(self._results):
+                if result.result != ResultOutcome.PENDING:
+                    self._results[index] = Result(name=result.name, result=ResultOutcome.PENDING)
 
         if self.should_run_again:
             self.status('todo')
```
**Collaborator** commented on the pending-state reset under `force`:

> I'd say this should recreate the list from scratch, i.e. not overwrite the existing results. Let's say there was a test with custom results, creating 100 results for that single test. Before running the test for the first time, there would be a single `PENDING` placeholder for the test; after the operation above, `results.yaml` may contain 100 results for that test, each with a `PENDING` outcome. And since tmt does not control the creation of these results, the test may easily decide not to produce several of them this time, so those particular results should never appear in our `results.yaml`. Dropping all results except a single `PENDING` one per test seems like the better option to me.
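A minimal sketch of that option, collapsing all existing results into one placeholder per test, keyed here by result name (matching by serial number, as raised in the next comment, may be more robust):

```python
# Sketch only: rebuild the results list from scratch so that each test
# keeps exactly one PENDING placeholder and stale custom results are dropped.
if force:
    placeholders: dict[str, Result] = {}
    for result in self._results:
        # A test with custom results may have produced many entries;
        # keep just one placeholder for all of them.
        placeholders.setdefault(
            result.name, Result(name=result.name, result=ResultOutcome.PENDING))
    self._results = list(placeholders.values())
```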


```diff
@@ -1201,7 +1203,11 @@ def go(self, force: bool = False) -> None:
         # which would make results appear several times. Instead, we can reach
         # into the original plugin, and use it as a singleton "entry point" to
         # access all collected `_results`.
-        self._results += execute_phases[0].results()
+        #
+        # Go through all results and replace those which were pending
+        for index, result in enumerate(execute_phases[0].results()):
+            if self._results[index].result == ResultOutcome.PENDING:
+                self._results[index] = result
 
         # To separate "execute" from the follow-up logging visually
         self.info('')
```
**Collaborator** commented on the pending-result replacement loop:

> Hmmm, this is weird. I think it will hit custom results again: for a single test, there may be 100 results in `execute_phases[0].results()` but not in `self._results`, and they would then all land at the very same index.
>
> I believe we need a more robust approach here, based on matching the results' test serial numbers and constructing the list from two sources, `execute_phases[0]` and `self._results`. I'd try walking `self._results` and finding out whether there are any results in `execute_phases[0].results()` for the same test: if there are, add them to the future `self._results`; if there are not, copy the pending one. Eventually, I'd expect to be left with a fully consumed `execute_phases[0].results()`, because all of those are valid results, and a possibly not fully consumed `self._results`, because some were moved to the new list while others didn't run and remain `PENDING`. Those I would add to the new list as well.
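A hedged sketch of that merge, matching results to tests by `serial_number` (the discover code above already compares serial numbers, though the `PENDING` placeholders would need theirs populated for this to work):

```python
# Sketch of the serial-number based merge described above.
fresh = execute_phases[0].results()
merged: list[Result] = []
consumed: set[int] = set()

for old in self._results:
    # Real results produced for this test replace its placeholder.
    matching = [r for r in fresh if r.serial_number == old.serial_number]
    if matching:
        merged.extend(matching)
        consumed.update(id(r) for r in matching)
    else:
        # The test did not run, keep its PENDING placeholder.
        merged.append(old)

# Every fresh result is valid; keep any that matched no placeholder.
merged.extend(r for r in fresh if id(r) not in consumed)
self._results = merged
```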

