Enhancements to diff_general #14
base: master
Changes from 2 commits
```diff
@@ -56,3 +56,4 @@ docs/_build/
 
 # PyBuilder
 target/
+autograder.zip
```
```diff
@@ -0,0 +1,2 @@
+all:
+	zip -r autograder.zip *
```
```diff
@@ -4,4 +4,4 @@ cd /autograder/source
 
 bash ./compile.sh
 
-python run_tests.py > /autograder/results/results.json
+python run_tests.py
```
Review comment: If you make the change suggested below, I'd suggest running the script as …
```diff
@@ -9,4 +9,5 @@
     klass = build_test_class(name)
    suite.addTest(klass(TestMetaclass.test_name(name)))
 
-JSONTestRunner(visibility='visible').run(suite)
+with open('/autograder/results/results.json', 'w') as f:
+    JSONTestRunner(stream=f, verbosity=2, visibility='visible', stdout_visibility='visible').run(suite)
```

Review discussion:
- This is definitely better; my only concern is that it makes it slightly more difficult to test the autograder outside of the Gradescope environment, unless you set up the directory structure on your local machine. That's why we initially set it up with output redirection, but I've regretted that since then. Perhaps you could make it take an optional command-line argument for the path to write the output to, so that if the path is supplied it writes to the file, and otherwise it prints to standard output? Another option would be to add a flag like --gradescope-output-path=true or something, but then you might have to deal with argparse/optparse or whatever the current thing is, which gets more complicated.
- I just copied the path from …
- Actually, I got this code from some of your other examples.
- Right, it is used in some other examples. But if you don't have an …
- Yes, agreed, but I was confused about how that wasn't the case before. But I guess in the original case it was easy to remove the redirect. Is that what you were thinking also?
- Oh, I was running …
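The optional output path floated in this thread could look something like the sketch below. This is only an illustration: the positional-argument convention, the `run_suite` helper, and the stdout fallback are assumptions, not part of this PR; only the `stream`, `verbosity`, `visibility`, and `stdout_visibility` arguments are taken from the diff above.

```python
import sys
import unittest

from gradescope_utils.autograder_utils.json_test_runner import JSONTestRunner


def run_suite(suite, output_path=None):
    """Write JSON results to output_path if given, otherwise to stdout.

    Falling back to stdout keeps the autograder easy to run locally,
    where the /autograder directory tree does not exist.
    """
    if output_path:
        with open(output_path, 'w') as f:
            JSONTestRunner(stream=f, verbosity=2, visibility='visible',
                           stdout_visibility='visible').run(suite)
    else:
        JSONTestRunner(verbosity=2, visibility='visible',
                       stdout_visibility='visible').run(suite)


if __name__ == '__main__':
    suite = unittest.TestSuite()  # populated as in run_tests.py
    run_suite(suite, sys.argv[1] if len(sys.argv) > 1 else None)
```

With something along these lines, run_autograder could call `python run_tests.py /autograder/results/results.json` inside the container, while a plain `python run_tests.py` on a development machine just prints the JSON to the terminal.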
```diff
@@ -5,6 +5,7 @@
 from subprocess32 import PIPE
 from gradescope_utils.autograder_utils.decorators import weight
 import yaml
+import difflib
 
 BASE_DIR = './test_data'
 
@@ -51,8 +52,19 @@ def fn(self):
         expected_err = load_test_file('err')
 
         msg = settings.get('msg', "Output did not match expected")
 
+        diff = difflib.unified_diff(expected_output.splitlines(True), output.splitlines(True), fromfile='expected_output', tofile='actual_output')
+
+        if diff is not None:
+            msg += "\nThe differences between expected standard output and actual standard output are as follows:\n" + ''.join(diff)
+
         self.assertEqual(expected_output, output, msg=msg)
         if expected_err is not None:
+            diff = difflib.unified_diff(expected_err.splitlines(True), err.splitlines(True), fromfile='expected_err', tofile='actual_err')
+
+            if diff is not None:
+                msg += "\nThe differences between expected standard error and actual standard error are as follows:\n" + ''.join(diff)
+
             self.assertEqual(expected_err, err, msg=msg)
     fn.__doc__ = 'Test {0}'.format(dir_name)
     return fn
```

Review discussion:
- I'd prefer to make this an optional feature somehow, because some instructors might not want this, since it could reveal the correct answer to students. This is a great idea, though!
- Yes, it reveals the answer, but it doesn't reveal the input, which I think is key.
- Ah, true... I'll have to think about this more. I still think it'd be better if it were optional, but maybe it's fine by default.
- Yeah, I had to think about this also. But I like your idea of having it optional. The problem I foresee is that the script doesn't directly deal with …
- Ah, but I think we can pass the arguments to the …
- Ah, that's a good point. I'd expect to be able to pass new options to JSONTestRunner or via the command line, but thinking about it, that might require some changes to gradescope-utils. If that's the case, I can take a look into that later!
- I think it works out of the box (e.g., the visibility parameter).
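One way to realize the "make it optional" idea would be to key the diff off the same per-test settings that already supply `msg`. A minimal sketch, assuming a hypothetical `show_diff` key in the test's settings; neither that key nor the helper below exists in this PR or in gradescope-utils.

```python
import difflib


def failure_message(settings, expected_output, output):
    """Build the assertion message, appending a unified diff only when the
    (hypothetical) show_diff setting allows it."""
    msg = settings.get('msg', "Output did not match expected")

    if settings.get('show_diff', True):  # assumed per-test flag, on by default
        # unified_diff returns a generator; joining it yields the diff text,
        # which is empty when the two outputs already match.
        diff_text = ''.join(difflib.unified_diff(
            expected_output.splitlines(True),
            output.splitlines(True),
            fromfile='expected_output',
            tofile='actual_output'))
        if diff_text:
            msg += ("\nThe differences between expected standard output and "
                    "actual standard output are as follows:\n" + diff_text)
    return msg
```

A test's settings file could then set `show_diff: false` for exercises where the reference output should stay hidden, while omitting the key would keep the diff behavior proposed here.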
Review comment: I'd prefer to list the files included a little more explicitly, like in https://github.com/gradescope/autograder_samples/blob/master/python/src/make_autograder.sh. Otherwise this will pick up random other stuff that you might not want or need, and might try to put the zipfile inside itself or something. I also slightly prefer using a shell script instead of a Makefile for this, or at least, if using a Makefile, it would be better to define dependencies. I haven't used Make in a little while, but I'm thinking that if you run `make` twice, it won't update things that were changed unless you explicitly list the dependencies for the target? For that reason I'd prefer just making a shell script: 1) for similarity with the other example directory, and 2) just to keep things simple.