Enhancements to diff_general #14
base: master
Conversation
Great ideas!
@@ -51,8 +52,19 @@ def fn(self):
         expected_err = load_test_file('err')
 
+        msg = settings.get('msg', "Output did not match expected")
+
+        diff = difflib.unified_diff(expected_output.splitlines(True), output.splitlines(True), fromfile='expected_output', tofile='actual_output')
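A minimal sketch of how the diff text might be folded into the failure message, assuming the expected_output, output, and msg values from the surrounding test code:

    import difflib

    def build_failure_message(expected_output, output, msg):
        # Compare line by line, keeping line endings so the diff stays faithful.
        diff = difflib.unified_diff(
            expected_output.splitlines(True),
            output.splitlines(True),
            fromfile='expected_output',
            tofile='actual_output',
        )
        # unified_diff yields lines lazily; join them into one printable block.
        return msg + "\n" + "".join(diff)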
I'd prefer to make this an optional feature somehow; some instructors might not want it, since it could reveal the correct answer to students. This is a great idea, though!
Yes, it reveals the answer, but it doesn't reveal the input, which I think is key.
Ah, true... I'll have to think about this more. I still think it'd be better if it were optional, but maybe it's fine by default.
Yeah, I had to think about this also. But I like your idea about having it optional. The problem I foresee is that the script doesn't directly deal with test_generator.py but rather the JSONTestRunner (I believe), which makes having it optional through, say, command-line arguments a bit complicated.
Ah, but I think we can pass the arguments to the JSONTestRunner() ctor.
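A rough sketch of that idea; visibility is a real constructor option already used in run_tests.py, while show_diff is purely hypothetical and would need support in gradescope-utils first:

    import unittest

    from gradescope_utils.autograder_utils.json_test_runner import JSONTestRunner

    suite = unittest.TestSuite()  # filled in by the test generator in practice

    # `visibility` is a real option that the existing examples already pass.
    JSONTestRunner(visibility='visible').run(suite)

    # A diff toggle passed the same way might look like this, but `show_diff`
    # is hypothetical -- gradescope-utils would need to support it first:
    # JSONTestRunner(visibility='visible', show_diff=True).run(suite)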
Ah, that's a good point. I'd expect to be able to pass new options to JSONTestRunner or via the command line, but thinking about it, that might require some changes to gradescope-utils. If that's the case, I can look into it later!
I think it works out of the box (e.g., visibility parameter).
@@ -9,4 +9,5 @@
     klass = build_test_class(name)
     suite.addTest(klass(TestMetaclass.test_name(name)))
 
-JSONTestRunner(visibility='visible').run(suite)
+with open('/autograder/results/results.json', 'w') as f:
This is definitely better; my only concern is that it makes it slightly more difficult to test the autograder outside of the Gradescope environment, unless you set up the directory structure on your local machine. That's why we initially set it up with output redirection, but I've regretted that since then. Perhaps you could make it take an optional command-line argument for the path to write the output to: if the path is supplied, it writes to the file, and otherwise it prints to standard output?
Another option would be to add a flag like --gradescope-output-path=true or something, but then you might have to deal with argparse/optparse or whatever is the current thing, which gets more complicated.
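A sketch of the optional-path idea, assuming JSONTestRunner's stream argument (which defaults to standard output) and a suite built as in run_tests.py:

    import sys
    import unittest

    from gradescope_utils.autograder_utils.json_test_runner import JSONTestRunner

    if __name__ == '__main__':
        suite = unittest.TestSuite()  # filled in by the test generator in practice

        if len(sys.argv) > 1:
            # A path was supplied (e.g. /autograder/results/results.json):
            # write the JSON results to that file.
            with open(sys.argv[1], 'w') as f:
                JSONTestRunner(visibility='visible', stream=f).run(suite)
        else:
            # No path: print to stdout, which is handy for local debugging.
            JSONTestRunner(visibility='visible').run(suite)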
I just copied the path from run_autograder. It's exactly the same. How would it make it difficult on the local machine? Do you not run run_autograder locally?
Actually, I got this code from some of your other examples.
Right, it is used in some other examples. But if you don't have an /autograder directory on your local machine, this line may throw an error trying to open that file, IIRC?
Yes, agreed, but I was confused about how that wasn't the case before. But I guess in the original case it was easy to remove the redirect. Is that what you were thinking also?
Oh, I was running run_tests.py directly rather than run_autograder to see the output. I'm also working on something to make it easier to run the Docker container locally, but that would require using Docker, so it'd still be better to have a way to run it locally to see how things are working.
@@ -4,4 +4,4 @@ cd /autograder/source
 
 bash ./compile.sh
 
-python run_tests.py > /autograder/results/results.json
+python run_tests.py
If you make the change suggested below, I'd run the script as:

-python run_tests.py
+python run_tests.py /autograder/results/results.json
diff_general/Makefile
@@ -0,0 +1,2 @@
+all:
+	zip -r autograder.zip *
I'd prefer to list the included files a little more explicitly, like in https://github.com/gradescope/autograder_samples/blob/master/python/src/make_autograder.sh. Otherwise this will pick up random other stuff that you might not want or need, and might try to put the zipfile inside itself or something.
I also slightly prefer using a shell script instead of a Makefile for this, or at least, if using a Makefile, it would be better to define dependencies. I haven't used Make in a little bit, but I'm thinking that if you run make twice, it won't update things that were changed unless you explicitly list the dependencies for the target? For that reason I'd prefer just making a shell script: 1) for similarity with the other example directory, and 2) to keep things simple.
BTW, I might not have the bandwidth to merge this any time soon, so don't feel like you need to make those changes right away. I just want to make sure that the example is generally useful for different people's preferences. Thanks for your contribution!
zip should sync with the current contents of the directory.
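For illustration, the explicit-file-list idea sketched with Python's standard zipfile module (the linked make_autograder.sh uses a shell zip invocation instead; the file names below are assumptions based on this example directory):

    import zipfile

    # List the archive contents explicitly so nothing stray gets picked up
    # and the zipfile never tries to include itself. (File names assumed.)
    FILES = ['run_autograder', 'run_tests.py', 'compile.sh', 'test_generator.py']

    with zipfile.ZipFile('autograder.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
        for name in FILES:
            zf.write(name)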