
Warn about over-qualified actors #86

Open
rcavaz opened this issue Jun 14, 2023 · 7 comments
Labels: enhancement (New feature or request)

Comments

rcavaz commented Jun 14, 2023

In other words: when reaching the end of a test scenario, show a warning if an actor didn't use all of its abilities.

What problem would this solve?

I've been thinking of strategies to track feature coverage within a project, and I believe Screenplay has the potential to offer part of a solution.

You might be familiar with the concept of Feature Modeling, which is essentially a way to represent different configurations of a software product.

In Screenplay you assign abilities to actors so that they can interact with the system under test, but there's nothing stopping you from sticking to a single actor and assigning them every possible ability in your suite. Hence, an over-qualified actor would be one who didn't use all of its abilities within a certain scope of tests (so perhaps not just single scenarios).

Having such a warning would at least flag the opportunity to create new, specialized actors and facilitate tracking feature coverage.
Also, this would most likely be a plugin rather than a core feature.

It would be great to hear your thoughts, and those of the community in general.

perrygoy (Member) commented Jun 15, 2023

This is an interesting idea; sort of like a Screenplay Pattern auditor? I like the idea of making it a plugin, or some kind of alternate Actor class—that way it can be opt-in.

Maybe we could alter the Actor class to be self-auditing, hidden behind a setting, but i worry about giving the Actor class too much to do. Maybe an Abilities wallet for the Actor which could handle all the Ability interactions? That would provide a nice spot to add this auditing thing. Hmm...
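
(Something like this, maybe? Purely a sketch — none of these names exist in screenpy yet:)

import warnings
from typing import Optional, Type


class AbilityWallet:
    """Hypothetical wallet the Actor could delegate all Ability bookkeeping to."""

    def __init__(self, *abilities) -> None:
        self.abilities = list(abilities)
        self.requested: set = set()

    def find(self, ability_type: Type) -> Optional[object]:
        """Hand back the matching Ability, remembering that it was asked for."""
        for ability in self.abilities:
            if isinstance(ability, ability_type):
                self.requested.add(ability_type)
                return ability
        return None

    def audit(self, actor_name: str) -> None:
        """Warn about any Abilities that were granted but never asked for."""
        unused = [
            ability
            for ability in self.abilities
            if not any(isinstance(ability, used) for used in self.requested)
        ]
        if unused:
            msg = f"{actor_name} did not use {', '.join(map(str, unused))}."
            warnings.warn(msg, ResourceWarning)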

rcavaz (Author) commented Jun 15, 2023

Perhaps an alternate Actor class could be the easiest implementation in terms of auditing the use of abilities: it could keep a set as a private variable that's updated whenever the attempts_to method is invoked. Then the actual auditing could happen either in the fixture, right after the yield statement and before the actor's exit method is called, or even as a cleanup step.
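
(For the cleanup-step option, something like this could work — just a sketch; the audit function here is a stand-in for whatever the tracking actor would expose:)

import pytest
from screenpy import AnActor


def audit(the_actor: AnActor) -> None:
    """Stand-in: compare the actor's abilities against whatever the tracking set recorded."""
    ...


@pytest.fixture
def Marcel(request: pytest.FixtureRequest) -> AnActor:
    the_actor = AnActor.named("Marcel")
    request.addfinalizer(the_actor.exit)            # registered first, so it runs last
    request.addfinalizer(lambda: audit(the_actor))  # runs first, before the actor exits
    return the_actor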

I'll think more about this feature.

perrygoy (Member) commented Jun 29, 2023

I just had one of those just-about-to-fall-asleep thoughts about this: We might be able to write an Ability that handles this!

I'm thinking that if you want to verify your Actors aren't overqualified, you give them this Ability as their first Ability. This new Ability has overridden __instancecheck__ and __subclasscheck__ methods which just keep track of which Abilities have been asked for by uses_ability_to (which uses isinstance to find the Ability it's looking for). Then, during the teardown, we can use this Ability to check which Abilities were used during the test and raise a warning for any that weren't used!

I'm guessing something like this, with the ability named ComplainAboutOverqualification as a standin (it's too long, i'd want something shorter) and a Pytest fixture:

import warnings
from typing import Generator

import pytest
from screenpy import AnActor, ComplainAboutOverqualification
from screenpy_selenium import BrowseTheWeb
from screenpy_requests import MakeAPIRequests


@pytest.fixture
def Callie() -> Generator:
    the_actor = AnActor.named("Callie").who_can(
        ComplainAboutOverqualification(),
        BrowseTheWeb.using_firefox(),
        MakeAPIRequests(),
    )
    
    yield the_actor

    # All of the below could be done in a new Action, like CheckQualifications or something
    abilities_used = the_actor.uses_ability_to(ComplainAboutOverqualification).to_list_abilities_used()  # ?
    unused_abilities = set(the_actor.abilities) - set(abilities_used)
    if unused_abilities:
        msg = f"Callie did not use {', '.join(map(str, unused_abilities))}."
        warnings.warn(msg, ResourceWarning)
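
(For completeness, that teardown bit wrapped up as a CheckQualifications Action might look roughly like this — still reusing the made-up Ability and its hypothetical to_list_abilities_used method:)

class CheckQualifications:
    """Sketch of an Action that warns about any Abilities the Actor never used."""

    def perform_as(self, the_actor: AnActor) -> None:
        tracker = the_actor.uses_ability_to(ComplainAboutOverqualification)
        abilities_used = set(tracker.to_list_abilities_used())  # ?
        unused_abilities = set(the_actor.abilities) - abilities_used
        if unused_abilities:
            msg = f"{the_actor} did not use {', '.join(map(str, unused_abilities))}."
            warnings.warn(msg, ResourceWarning)

Then the teardown in the fixture is just the_actor.attempts_to(CheckQualifications()).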

Things i like about this approach:

  • The opt-in is apparent!
  • It's got clear intention behind it
  • The only changes you need to make are to give a new Ability to your existing Actor(s) and do a check during teardown
  • Used in this way with Pytest fixtures, it will do this check in each individual test, which means that it will call out in which tests specifically Callie is overqualified

Things i don't like:

  • You do have to do quite a bit of extra work
  • I wish i could use the Ability's forget method to do this work automatically, so the only step needed is to give the Actor the new Ability (it won't have access to the Actor, so this isn't an option)

What are your thoughts? Is this just a half-asleep fever dream and it's not as cool as it seems?

bandophahita (Contributor) commented:

> I've been thinking of strategies to track feature coverage within a project

Would a code coverage tool provide this?

perrygoy (Member) commented Sep 19, 2023

Just to bring back some learnings—

I tried to implement my just-before-sleeping approach above using __instancecheck__ from PEP 3119, but i had misunderstood how it works. That dunder method is only invoked on the class being checked against (via its metaclass), never on the instance. The instance only has its __class__ inspected, so there's no way for an instance to know what class it's being compared to.
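
(A tiny demo of that behavior — the class names here are just for illustration:)

class Watchful(type):
    """Metaclass whose __instancecheck__ announces when its class is checked against."""

    def __instancecheck__(cls, instance):
        print(f"isinstance asked {cls.__name__} about {type(instance).__name__}")
        return super().__instancecheck__(instance)


class SomeAbility(metaclass=Watchful):
    ...


class TrackingAbility:
    ...


isinstance(TrackingAbility(), SomeAbility)  # prints: isinstance asked SomeAbility about TrackingAbility
isinstance(SomeAbility(), TrackingAbility)  # prints nothing; the hook belongs to the class being checked against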

It doesn't look like there's a way to track this information without a new Actor class who does it for you. This could be a pretty simple drop-in, though, thanks to all the work @bandophahita has already done with the classmethod and self-typing.

import warnings
from collections import defaultdict
from typing import Type

from screenpy import Actor
from screenpy.actor import T_Ability


class SelfAuditingActor(Actor):
    def uses_ability_to(self, ability: Type[T_Ability]) -> T_Ability:
        """Track the ability requested before supplying it."""
        self.used_abilities[ability] += 1
        return super().uses_ability_to(ability)

    def exit(self) -> None:
        """Clean up, warn about any unused abilities."""
        unused_abilities = [
            ability
            for ability in self.abilities
            if not any(isinstance(ability, used) for used in self.used_abilities)
        ]
        if unused_abilities:
            msg = f"{self.name} did not use {', '.join(map(str, unused_abilities))}."
            warnings.warn(msg, ResourceWarning)
        super().exit()

    def __init__(self, name: str) -> None:
        self.used_abilities = defaultdict(int)
        super().__init__(name)
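
(Dropping that into a fixture would then look about the same as before — a sketch, reusing the class above and the same Abilities as earlier:)

import pytest
from screenpy_selenium import BrowseTheWeb
from screenpy_requests import MakeAPIRequests


@pytest.fixture
def Callie():
    the_actor = SelfAuditingActor.named("Callie").who_can(
        BrowseTheWeb.using_firefox(),
        MakeAPIRequests(),
    )
    yield the_actor
    the_actor.exit()  # exit() now warns about any Abilities Callie never used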

I guess the question is, where would we put that? Should we have some kind of screenpy_doctor or screenpy_audit offshoot repo that has some helpful diagnostic Abilities/Actions/Questions?

bandophahita (Contributor) commented:

> where would we put that?

Recipes?

perrygoy (Member) commented:

Yeah, recipes is probably a good place to start it off. If we end up collecting more useful ways to audit your ScreenPy suite, then we can re-assess whether we branch it off into its own repo with specialized tools.

How does that sound @rcavaz? Does the code above get you what you need?

perrygoy added the enhancement label on Sep 22, 2023