Unit tests for pallet EPM-MB #6332
base: gpestana/pallet-epm-mb
Conversation
A few comments and nits, especially around merging test cases that are similar and simplifying the test code.
Other than that, this is great, thanks for the help!
// let some_bn = 0;
// let some_page_index = 0;

let account_id = 99;
Nit, but since the account_id is not so important for the tests, there's no need to define it as a separate var. Just use 99 directly in the calls.
I prefer to use a variable rather than a number that looks a little magical. This way, wherever the variable is used, one can see what it actually represents.
}

#[test]
fn force_clear_submission_fails_if_called_by_account_none() {
I would refactor this test to check that all origins behave correctly when calling bail, i.e.:
- Origin none fails
- Origin signed fails if there's no registration
- Origin signed is OK if there's a registration
- etc.
This would cover all the cases (rough sketch below) and remove the need for the test above (bail_while_having_no_submissions_does_not_modify_balances).
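A sketch of the idea; the call, helper and error names below are placeholders, not the pallet's actual API:

use frame_support::{assert_noop, assert_ok};

#[test]
fn bail_origin_checks() {
	ExtBuilder::default().build_and_execute(|| {
		// Origin none fails outright (placeholder call name `bail`).
		assert_noop!(SignedPallet::bail(RuntimeOrigin::none()), sp_runtime::DispatchError::BadOrigin);
		// Origin signed fails if the account has no registration (placeholder error variant).
		assert_noop!(SignedPallet::bail(RuntimeOrigin::signed(99)), Error::<Runtime>::NoSubmission);
		// Origin signed is OK once there is a registration (placeholder `register` call).
		assert_ok!(SignedPallet::register(RuntimeOrigin::signed(99), Default::default()));
		assert_ok!(SignedPallet::bail(RuntimeOrigin::signed(99)));
	})
}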
This is a test for force_clear, but I guess you mean the one for bail.
Anyway, if I understand correctly, you would put all the test cases for the bail function into one test function.
If so, I would be against it. It would surely provide the coverage we need and verify everything we need, but it would result in overloaded tests checking lots of things, instead of many tests, each checking one very specific outcome.
So let me understand your reasoning here.
Quick 2 cents from me: in this case, testing different origins is a simple procedure, so I get @Zebedeusz's point of having separate #[test]s.
If testing different origins required a (more or less) similar setup every time, doing it all in the same test would make more sense, as @gpestana suggests.
Example: a verbose integration test for Westend's people parachain checking for different wrong origins.
}

#[test]
fn force_clear_submission_fails_if_submitter_done_no_submissions_at_all() {
This test is useful, but I'd try to integrate more test cases to cover all the possible edge cases in one test alone. This will help keep the number of tests low while covering everything.
Is there a specific reason why you're suggesting the approach of having a few tests with lots of checks inside each one?
My preferred approach is exactly the opposite, so I'd like to understand your point of view.
The reasons why I'd merge these small test cases are:
- Simplify - we can re-use setup code across test cases;
- Context - the tests are basically testing the same unit (the force_clear behaviour);
- Follow-along/readability - it's easier to understand all the invariants of force_clear if they are all in the same test, and we can use comments to explain what is being tested and why errors happen;
- Low verbosity - especially due to 1.
See, for example, this test case in staking: https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/staking/src/tests.rs#L1283
Within one test we cover the whole flow of Call::unbond. We could split it into different tests, but the setup/boilerplate code would be repeated across the individual tests. Also, through that test you can read and understand the whole unbond flow, requirements, invariants, etc. (rough sketch of the merged style below).
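A rough sketch of what the merged force_clear test could look like; the helper and error names are placeholders, not the pallet's actual API:

use frame_support::{assert_noop, assert_ok};

#[test]
fn force_clear_submission_works() {
	ExtBuilder::default().build_and_execute(|| {
		// No submission at all: force_clear fails and, per assert_noop!, leaves the state untouched.
		assert_noop!(force_clear_for(99), Error::<Runtime>::NoSubmission);

		// After a registration, force_clear succeeds.
		assert_ok!(register_for(99));
		assert_ok!(force_clear_for(99));

		// Clearing again without a new submission fails once more, still without touching state.
		assert_noop!(force_clear_for(99), Error::<Runtime>::NoSubmission);
	})
}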
https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/staking/src/tests.rs#L1495 is another good example, where assert_noop! is also used to ensure that errors do not change the storage root (and state).
})
}

#[test]
This test could be merged with the test above. Calling it get_wrong_solution_page_works and then testing all the cases where None is returned due to a bad page index is a good call!
As above: I'm not in favour of merged scenarios, so let's discuss that bigger topic first.
@@ -66,6 +67,14 @@ pub type T = Runtime;
pub type Block = frame_system::mocking::MockBlock<Runtime>;
pub(crate) type Solver = SequentialPhragmen<AccountId, sp_runtime::PerU16, ()>;

pub struct Weighter;

impl Get<RuntimeDbWeight> for Weighter {
Do we really need this? The type DbWeight can be set to the unit type in the test mock configs, i.e.:
type DbWeight = ();
@@ -80,6 +89,7 @@ frame_election_provider_support::generate_solution_type!(
impl frame_system::Config for Runtime {
type Block = Block;
type AccountData = pallet_balances::AccountData<Balance>;
type DbWeight = Weighter;
Suggested change:
- type DbWeight = Weighter;
+ type DbWeight = ();
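For context, () satisfies Get<RuntimeDbWeight> through the blanket implementation of Get for the unit type, which returns the default (i.e. zero) weights. A quick sketch:

use frame_support::{traits::Get, weights::RuntimeDbWeight};

// `()` implements `Get<T>` for any `T: Default` and returns `T::default()`,
// so `type DbWeight = ();` gives zero read/write DB weights in the test mock.
fn unit_db_weight() -> RuntimeDbWeight {
	<() as Get<RuntimeDbWeight>>::get()
}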
self
}

pub(crate) fn no_desired_targets(self) -> Self {
Why do we need this? I think we should remove it; also, setting desired targets to Err("none") doesn't seem correct to me.
})
}

#[test]
So: given there are existing submissions, if someone bails from a submission that they didn't make, then the balances of all the participants (the one that had a submission and, especially, the one that bailed without having one) remain unchanged.
This can be tested using the assert_noop! macro (simplified sketch below). Basically, with assert_noop!(call(), Error), the macro checks a couple of things:
- call() returns the expected Error;
- the externalities storage was not changed by calling call(), which means the balances and all other state remain the same.
I'd just remove the whole setup of the registration etc. and call assert_noop!(bail_for(1), Err). This tests that bailing does not work in this case and that the storage root remained the same, so no state has changed (and it simplifies the tests considerably).
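For reference, a simplified sketch of what assert_noop!(call, expected_err) does under the hood (not the exact frame_support macro expansion; call and expected_err stand in for the actual call and error):

// Capture the storage root, assert the call fails with the expected error,
// then assert the root is unchanged, i.e. the failed call was a no-op on state.
let root_before = frame_support::storage_root(frame_support::StateVersion::V1);
frame_support::assert_err!(call, expected_err);
assert_eq!(root_before, frame_support::storage_root(frame_support::StateVersion::V1));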
}

#[test]
fn bail_while_having_no_submissions_does_not_modify_balances() {
Same as above: assert_noop!() will simplify the test case and make it more comprehensive.