Enhancements to diff_general #14

Open · wants to merge 4 commits into master
Changes from 2 commits
1 change: 1 addition & 0 deletions diff_general/.gitignore
@@ -56,3 +56,4 @@ docs/_build/

# PyBuilder
target/
autograder.zip
2 changes: 2 additions & 0 deletions diff_general/Makefile
@@ -0,0 +1,2 @@
all:
	zip -r autograder.zip *
Contributor:
I'd prefer to list the included files a little more explicitly, like in https://github.com/gradescope/autograder_samples/blob/master/python/src/make_autograder.sh

Otherwise this will pick up other files you might not want or need, and it might even try to put the zip file inside itself.

I also slightly prefer using a shell script instead of a Makefile for this, or at least, if we keep a Makefile, it should declare its dependencies. I haven't used Make in a while, but as I recall, running make twice won't rebuild a target whose inputs changed unless those inputs are explicitly listed as dependencies of the target. For that reason I'd prefer a plain shell script: 1) for consistency with the other example directory, and 2) to keep things simple.
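For reference, a minimal sketch of the kind of explicit packaging script being suggested. The file list below is an assumption based only on the files visible in this PR; a real script would list whatever else diff_general actually ships (e.g. a setup.sh, if one exists).

```bash
#!/usr/bin/env bash
# Sketch of an explicit make_autograder.sh-style script: zip only the files the
# autograder needs, so stray files (or an older autograder.zip) never get in.
set -e

rm -f autograder.zip
zip -r autograder.zip \
    run_autograder \
    compile.sh \
    run_tests.py \
    test_generator.py \
    test_data
```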

2 changes: 1 addition & 1 deletion diff_general/run_autograder
@@ -4,4 +4,4 @@ cd /autograder/source

bash ./compile.sh

python run_tests.py > /autograder/results/results.json
python run_tests.py
Contributor:
If you make the change suggested below, I'd recommend running the script as:

Suggested change:
- python run_tests.py
+ python run_tests.py /autograder/results/results.json

3 changes: 2 additions & 1 deletion diff_general/run_tests.py
@@ -9,4 +9,5 @@
klass = build_test_class(name)
suite.addTest(klass(TestMetaclass.test_name(name)))

JSONTestRunner(visibility='visible').run(suite)
with open('/autograder/results/results.json', 'w') as f:
Contributor:
This is definitely better; my only concern is that it makes it slightly harder to test the autograder outside the Gradescope environment, unless you set up the same directory structure on your local machine. That's why we initially set it up with output redirection, though I've regretted that since. Perhaps you could take an optional command-line argument for the output path: if a path is supplied, write the results to that file; otherwise, print them to standard output?

Another option would be to add a flag like --gradescope-output-path=true or something, but then you might have to deal with argparse/optparse or whatever the current tool is, which gets more complicated.
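A minimal sketch of that optional-argument idea (none of this is in the PR; the argv handling and the stdout fallback are assumptions about how it could look):

```python
import sys
import unittest

from gradescope_utils.autograder_utils.json_test_runner import JSONTestRunner

suite = unittest.TestSuite()  # populate exactly as run_tests.py already does

# If an output path was given (e.g. /autograder/results/results.json), write the
# JSON results there; otherwise print to stdout so local runs need no /autograder.
if len(sys.argv) > 1:
    with open(sys.argv[1], 'w') as f:
        JSONTestRunner(stream=f, visibility='visible').run(suite)
else:
    JSONTestRunner(stream=sys.stdout, visibility='visible').run(suite)
```

run_autograder would then call python run_tests.py /autograder/results/results.json, matching the suggested change above, while a plain python run_tests.py would still print to the terminal.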

Author:
I just copied the path from run_autograder; it's exactly the same. How would this make things harder on a local machine? Do you not run run_autograder locally?

Author:
Actually, I got this code from some of your other examples.

Contributor:
Right, it is used in some other examples. But if you don't have an /autograder directory on your local machine, this line may throw an error when it tries to open that file, IIRC.

Author:
Yes, agreed, but I was confused about how that wasn't already the case before. I guess with the original setup it was easy to just remove the redirect. Is that what you were thinking too?

Contributor:
Oh, I was running run_tests.py directly, rather than run_autograder, to see the output. I'm also working on something to make it easier to run the Docker container locally, but that would require Docker, so it would still be better to have a way to run the tests directly and see how things are working.

    JSONTestRunner(stream=f, verbosity=2, visibility='visible', stdout_visibility='visible').run(suite)
12 changes: 12 additions & 0 deletions diff_general/test_generator.py
@@ -5,6 +5,7 @@
from subprocess32 import PIPE
from gradescope_utils.autograder_utils.decorators import weight
import yaml
import difflib

BASE_DIR = './test_data'

@@ -51,8 +52,19 @@ def fn(self):
    expected_err = load_test_file('err')

    msg = settings.get('msg', "Output did not match expected")

    diff = difflib.unified_diff(expected_output.splitlines(True), output.splitlines(True), fromfile='expected_output', tofile='actual_output')
Contributor:
I'd prefer to make this an optional feature somehow, because some instructors might not want it, since the diff could reveal the correct answer to students. This is a great idea, though!

Author:
Yes, it reveals the answer, but it doesn't reveal the input, which I think is key.

Contributor:
Ah, true... I'll have to think about this more. I still think it would be better if it were optional, but maybe it's fine as the default.

Author:
Yeah, I had to think about this too. I like your idea of making it optional. The problem I foresee is that the run script doesn't deal with test_generator.py directly, but rather with the JSONTestRunner (I believe), which makes toggling it through, say, command-line arguments a bit complicated.

Author:
Ah, but I think we can pass the arguments to the JSONTestRunner() constructor.

Contributor:
Ah, that's a good point. I'd expect to be able to pass new options either to JSONTestRunner or via the command line, but thinking about it, that might require some changes to gradescope-utils. If that's the case, I can look into it later!

Author:
I think it works out of the box (e.g., visibility parameter).
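For reference, a minimal sketch of one way the diff could be made optional: a hypothetical show_diff key read from the same per-test settings dict that msg already comes from. Both build_diff_message and show_diff are illustrative names, not part of this PR or of gradescope-utils.

```python
import difflib


def build_diff_message(expected_output, output, settings, msg):
    # `settings` is the per-test settings dict the generated test already reads
    # `msg` from; `show_diff` is a hypothetical opt-out key (defaults to on).
    if settings.get('show_diff', True):
        diff = difflib.unified_diff(
            expected_output.splitlines(True), output.splitlines(True),
            fromfile='expected_output', tofile='actual_output')
        msg += ("\nThe differences between expected standard output and actual "
                "standard output are as follows:\n" + ''.join(diff))
    return msg
```

An instructor could then set something like show_diff: false in a test's settings file to keep the expected output hidden.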


    if diff is not None:
        msg += "\nThe differences between expected standard output and actual standard output are as follows:\n" + ''.join(diff)

    self.assertEqual(expected_output, output, msg=msg)
    if expected_err is not None:
        diff = difflib.unified_diff(expected_err.splitlines(True), err.splitlines(True), fromfile='expected_err', tofile='actual_err')

        if diff is not None:
            msg += "\nThe differences between expected standard error and actual standard error are as follows:\n" + ''.join(diff)

        self.assertEqual(expected_err, err, msg=msg)
fn.__doc__ = 'Test {0}'.format(dir_name)
return fn