+1 I like the first one, where the report can use the JSON from the last run as an input; then in the UI we can show the difference (i.e. a test that passed is now failing).
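Not sure what the exported run JSON actually looks like, but as a rough sketch of that idea (the `Tests`, `Id` and `Result` keys and the file names are placeholder assumptions):

```python
# Minimal sketch: feed the previous run's JSON into the current report and
# list tests that regressed (passed last run, failing now).
import json

def load_results(path: str) -> dict:
    """Map test Id -> Result ("Passed"/"Failed") from a run's JSON export."""
    with open(path, encoding="utf-8") as f:
        return {t["Id"]: t["Result"] for t in json.load(f)["Tests"]}

def regressions(previous_path: str, current_path: str) -> list[str]:
    """Tests that passed in the previous run but are failing in the current one."""
    prev, curr = load_results(previous_path), load_results(current_path)
    return [tid for tid, result in curr.items()
            if result == "Failed" and prev.get(tid) == "Passed"]

if __name__ == "__main__":
    for tid in regressions("last-run.json", "current-run.json"):
        print(f"REGRESSION: {tid} passed last run but is failing now")
```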
Good ideas. Right now we are just running the reports weekly and then saving a comparison JSON for just the week-over-week test delta. Adding this to the UI with more robustness would be awesome.
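For what it's worth, the persisted week-over-week comparison could be as simple as something like this (same hypothetical schema as the sketch above; the output layout is just an illustration, not the actual file format):

```python
# Sketch: write the week-over-week delta to its own comparison JSON.
import json
from datetime import date

def _results(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return {t["Id"]: t["Result"] for t in json.load(f)["Tests"]}

def save_weekly_comparison(last_week: str, this_week: str) -> str:
    prev, curr = _results(last_week), _results(this_week)
    delta = {
        "generated": date.today().isoformat(),
        "newly_failing": sorted(t for t, r in curr.items() if r == "Failed" and prev.get(t) == "Passed"),
        "newly_passing": sorted(t for t, r in curr.items() if r == "Passed" and prev.get(t) == "Failed"),
        "added": sorted(curr.keys() - prev.keys()),
        "removed": sorted(prev.keys() - curr.keys()),
    }
    out = f"comparison-{delta['generated']}.json"
    with open(out, "w", encoding="utf-8") as f:
        json.dump(delta, f, indent=2)
    return out
```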
As I want to use the tests for ISO certification, I want all tests that have succeeded to be "marked" somehow (e.g. implemented/in place) so that on the next run we can identify failed controls.
One solution could be to save the IDs of the succeeded tests to a JSON file that can be used as the baseline, which would also allow easy editing (see the first sketch after this comment).
Another way would be to create custom tests, cherry-picking each test and copying it over.
The disadvantage is that this requires a lot of reverse engineering and diving into the details of each test, and the result will differ from one organization to another.
A third way would be to add a tag to each test with an implemented/not-implemented value, but again that seems very labour-intensive.
A fourth way would be to have a JSON file with all tags that need to be run, so a test can be run by tag ID. I could then add a specific tag to each test to map it to an ISO control (a one-time large job, and not 100% possible) and use those tags to run the specified tests (see the second sketch after this comment).
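First sketch, for the baseline option. The `Tests`/`Id`/`Result` field names are assumptions about the report schema; the baseline file is just a flat, hand-editable JSON list of test IDs:

```python
# Sketch: save the IDs of succeeded tests as a baseline, then flag baseline
# tests ("implemented"/"in place") that are no longer passing on a later run.
import json

def load_results(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return {t["Id"]: t["Result"] for t in json.load(f)["Tests"]}

def save_baseline(run_path: str, baseline_path: str = "baseline.json") -> None:
    """Write the IDs of all currently passing tests as an editable JSON list."""
    passed = sorted(tid for tid, r in load_results(run_path).items() if r == "Passed")
    with open(baseline_path, "w", encoding="utf-8") as f:
        json.dump(passed, f, indent=2)

def failed_controls(run_path: str, baseline_path: str = "baseline.json") -> list[str]:
    """Baseline tests that are no longer passing in the given run."""
    with open(baseline_path, encoding="utf-8") as f:
        baseline = set(json.load(f))
    results = load_results(run_path)
    return sorted(tid for tid in baseline if results.get(tid) != "Passed")
```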
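Second sketch, for the tag-based option. The mapping file, its structure, and the `Tags` field are all assumptions, and how the selected tags actually get passed to the test runner depends on the framework; this only shows the control-to-tag mapping and the per-control roll-up:

```python
# Sketch: map ISO controls to test tags in a JSON file, derive which tags to
# run, and roll test results up per control.
import json

# iso-mapping.json (hypothetical):
# { "A.9.2.3": ["PrivilegedAccess"], "A.12.4.1": ["AuditLogging", "SIEM"] }

def tags_to_run(mapping_path: str = "iso-mapping.json") -> set[str]:
    """All tags referenced by the ISO mapping, i.e. what the runner should execute."""
    with open(mapping_path, encoding="utf-8") as f:
        return {tag for tags in json.load(f).values() for tag in tags}

def control_status(run_path: str, mapping_path: str = "iso-mapping.json") -> dict:
    """Roll test results up to ISO controls via their tags."""
    with open(mapping_path, encoding="utf-8") as f:
        mapping = json.load(f)
    with open(run_path, encoding="utf-8") as f:
        tests = json.load(f)["Tests"]
    status = {}
    for control, tags in mapping.items():
        relevant = [t for t in tests if set(t.get("Tags", [])) & set(tags)]
        if not relevant:
            status[control] = "NotCovered"
        elif all(t["Result"] == "Passed" for t in relevant):
            status[control] = "Implemented"
        else:
            status[control] = "Failed"
    return status
```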