[WIP] adding additional tests for notebooks #117
base: main
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##             main     #117      +/-   ##
==========================================
+ Coverage   95.40%   95.84%   +0.43%
==========================================
  Files          26       26
  Lines        1371     1371
  Branches      192      192
==========================================
+ Hits         1308     1314       +6
+ Misses         34       30       -4
+ Partials       29       27       -2
```
…d to count, added more tests for mnist_convolutional, fixed empty code cell in neural_network_formulations notebook, added nbformat to list of packages to be installed for testing
The mnist_convolutional and neural_network_formulations notebooks have empty code cells at the bottom which cause errors.
…a few other things
Notebook tests (Laird - to merge with kalset1)
I added some broad comments. Let's do another pass and review again.
```python
# return testbook for given notebook
def open_book(folder, notebook_fname, **kwargs):
    execute = kwargs.get("execute", True)
```
Can you use named arguments instead of **kwargs in these methods? Actual arguments provide better documentation for someone using the functions.
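For example, a minimal sketch of what an explicit signature could look like (the `execute=True` default and the 300-second timeout are carried over from the current code; the `os.chdir(folder)` step is an assumption about how `folder` is used):

```python
import os

from testbook import testbook


def open_book(folder, notebook_fname, execute=True, timeout=300):
    """Return a testbook context manager for the given notebook."""
    os.chdir(folder)  # assumes the caller restores the working directory afterwards
    return testbook(notebook_fname, execute=execute, timeout=timeout)
```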
```python
with testbook(notebook_fname, timeout=300, execute=True) as tb:
    assert tb.code_cells_executed == n_cells
os.chdir(cwd)
book = testbook(notebook_fname, execute=execute, timeout=300)
```
Do we know if this timeout is sufficiently large (but not too large)?
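One option, just as a sketch (the environment variable name is made up), is to keep the timeout in a single tunable place so it can be raised on slow CI runners without editing every test:

```python
import os

# Hypothetical knob: default to 300 s, but allow CI to override it.
NOTEBOOK_TIMEOUT = int(os.environ.get("NOTEBOOK_TEST_TIMEOUT", "300"))
```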
```python
# checks for correct type and number of layers in a model
def check_layers(tb, activations, network):
```
It might be better if the function names matched what the function does (rather than what it is used for). This function actually injects code into the notebook. Maybe "inject_activation_check"?
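Something along these lines, as a sketch (the `.layers` / `.activation` attributes are assumptions about the network object inside the notebook):

```python
# Inject assertions into the notebook kernel that check each layer's activation.
def inject_activation_check(tb, activations, network):
    # `network` is the name of the network object inside the notebook;
    # `activations` is the expected activation of each layer, in order.
    tb.inject(f"assert len(list({network}.layers)) == {len(activations)}")
    for i, activation in enumerate(activations):
        tb.inject(f"assert list({network}.layers)[{i}].activation == '{activation}'")
```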
```python
# counting number of cells
def cell_counter(notebook_fname, **kwargs):
```
Similar thought here as above. How about "get_cell_count"
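A sketch of what that could look like with nbformat (which this PR already adds as a test dependency); the `only_code_cells` flag is hypothetical:

```python
import nbformat


def get_cell_count(notebook_fname, only_code_cells=True):
    """Count the cells in a notebook, by default only the code cells."""
    nb = nbformat.read(notebook_fname, as_version=4)
    if only_code_cells:
        return sum(1 for cell in nb.cells if cell.cell_type == "code")
    return len(nb.cells)
```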
```python
# gets model stats for mnist notebooks
def mnist_stats(tb, fname):
    total_cells = cell_counter(fname)
    tb.inject("test(model, test_loader)")
```
What is this line doing? Can we add some more comments that describe what these (and other) lines are doing?
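Roughly the kind of comments I mean (building on the renamed helper sketched above; `test`, `model`, and `test_loader` are assumed to be names defined inside the notebook itself):

```python
# gets model stats for mnist notebooks
def mnist_stats(tb, fname):
    # count the notebook's code cells so we can later assert they all executed
    total_cells = get_cell_count(fname)
    # run the notebook's own evaluation loop inside the notebook kernel, so the
    # accuracy/loss are computed with the exact objects the notebook created
    tb.inject("test(model, test_loader)")
```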
```python
# neural network formulation notebook helper
def neural_network_checker(tb, ref_string, val1, val2, tolerance):
```
Same comment here. We need some comments to help understand what these are doing.
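For example, a commented sketch (my reading of `val1`/`val2` as the expected components of the referenced solution is an assumption):

```python
import pytest


# neural network formulation notebook helper
def neural_network_checker(tb, ref_string, val1, val2, tolerance):
    # pull the referenced value (e.g. a solved (x, y) pair) out of the notebook kernel
    solution = tb.ref(ref_string)
    # compare both components against their expected values within the given tolerance
    assert solution[0] == pytest.approx(val1, abs=tolerance)
    assert solution[1] == pytest.approx(val2, abs=tolerance)
```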
```python
# check loss of model
model_loss = tb.ref("nn.evaluate(x, y)")
assert model_loss == pytest.approx(0.000389626, abs=0.00031)
```
The model loss will not be consistent enough to check this way. I think we should just check that it is below some acceptable tolerance, so we at least know if the network begins training more poorly for some reason.
```python
assert model_loss == pytest.approx(0.00024207, abs=0.00021)
```
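Something like this, as a sketch of the suggested upper-bound check (the 1e-3 threshold is a placeholder, not a value from this PR):

```python
# Only require the loss to be below a loose upper bound; exact values vary run to run.
LOSS_UPPER_BOUND = 1e-3

model_loss = tb.ref("nn.evaluate(x, y)")
assert model_loss <= LOSS_UPPER_BOUND, f"training loss regressed: {model_loss}"
```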
```python
# check layers of model
layers = ["sigmoid", "sigmoid", "sigmoid", "sigmoid", "linear"]
```
I don't think these layer checks are needed. It's more important that the numbers from OMLT are still good.
Yes, I think that makes sense. Do you think checking the layers of the imported models (the ones that are built first with PyTorch and then converted using "load_onnx_neural_network_with_bounds") still makes sense? Or is that not needed?
changing the solver in notebooks
Legal Acknowledgement
By contributing to this software project, I agree my contributions are submitted under the BSD license.
I represent I am authorized to make the contributions and grant the license.
If my employer has rights to intellectual property that includes these contributions,
I represent that I have received permission to make contributions and grant the required license on behalf of that employer.