Enable tutorial.error.checker to provide checks on error #356
Comments
At the moment, this is the chief reason I'm delaying wiring up an autograder to the primers.

Plan of action:
I endorse Garrett's point about enabling the grading function to handle parsing errors and evaluation errors. Let me describe a strategy that I think would add a lot of flexibility and power to people writing error-checking packages. You can leave the current … Create another type of user-input widget. Let's call it a …
NOTE ADDED AS I EXPLORE THE ISSUE MORE … @garrettgman I wonder if I can implement this just by providing a new function for … Leave it to the exercise checker to evaluate any setup code from (3) and the student submission code (1). Learnr should provide functions that allow the checker system to assign specific shiny output widgets to the … Learnr should not evaluate the student submission directly. Leave that to the checker code, with learnr providing a safe sandbox in which that checker code can operate. My experience in writing … Maybe a short way to describe what I'd like …
* empty_results()'s html_output needs to be req()-able; otherwise, a value is never actually returned (follow up to #235)
* Refactor evaluate_exercise() and add exercise.error.checker for running a checker function on exercise evaluation errors, closes #356
* code review feedback
* knitr fig options were actually needed
* shorter tempfile pattern
* docs
* Provide an exercise.error.checker option
* Document new exercise.error.checker option
* update news
The fix in #403 solves a different issue:
In other words, we need to keep the method of checking that evaluates student code to check the result, but we need to supplement it with a routine that spots and handles errors without exiting the checking algorithm. @cpsievert thanks for your hard work, and let me know if I misunderstood #403, because it is very possible.

#403 addresses both of these needs. Here's a link to an example of the former case. And here's an example of the latter case:
BTW, I don't know what you mean by "checking algorithm", but if it happens to be something other than … BTW, we also now have access to the error condition object in …

If you happen to need a different checking algorithm than …
@cpsievert Thanks for the quick reply. I need something slightly different for the second case. To make it concrete, I need to use both …

If the student enters something that produces an error, like … Under the current setup, I get the … This is what I mean by a "plan b": we first try to check against the result with … I'd like to track this as an open issue. But I can open a new issue for it if you prefer.
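A minimal sketch of the "plan B" flow described here, using stand-in helpers rather than the real learnr/gradethis checker machinery (which receives more context than shown); all function names below are made up for illustration.

```r
# Plan A: grade the evaluated result. Plan B: if evaluation errors, grade the
# unevaluated code text instead, in the spirit of gradethis::grade_code().

check_result <- function(result) {
  # Result-based "unit tests": accept mtcars, flag the cars/mtcars mix-up
  if (identical(result, mtcars)) {
    list(correct = TRUE,  message = "Nice work!")
  } else if (identical(result, cars)) {
    list(correct = FALSE, message = "Careful: cars is a different data set than mtcars.")
  } else {
    list(correct = FALSE, message = "That isn't the result I expected.")
  }
}

check_code_text <- function(user_code) {
  # Code-based fallback: there is no result, so talk about the code itself
  list(correct = FALSE,
       message = paste0("Your code produced an error. I expected `mtcars`, but you wrote `",
                        trimws(user_code), "`."))
}

plan_b_check <- function(user_code) {
  result <- tryCatch(eval(parse(text = user_code)), error = identity)
  if (inherits(result, "error")) {
    check_code_text(user_code)  # plan B: the submission errored
  } else {
    check_result(result)        # plan A: result-based checks
  }
}

plan_b_check("mtcars")  # passes the result-based check
plan_b_check("mt")      # errors, so the code-based fallback runs
```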
I think of the `-check` chunk (and `grade_result()`) as the grading equivalent of unit tests. I'm trying to anticipate ahead of time all of the ways the student can fail, and to provide targeted guidance for each failure mode. Here, `cars` is not the right answer, but it is a failure mode that I want to screen for, because the two data set names are easy to confuse in your head (BTW this is a made-up example).

Most "unit tests" will need to test the result of the student code. But the final "unit test" would always be "was the result an error? If so, return the output of `grade_code()`." The tricky thing is to make sure that _all_ of the tests get applied to the student work in every case.

TL;DR - I want them to get `mtcars` because that is the solution, but I want to include a check for the failure mode where they use `cars`.
> On Fri, Aug 21, 2020 at 7:49 PM Carson Sievert ***@***.***> wrote:
>
> I don't quite follow... why do you have mtcars in the -solution chunk, but cars in the -check chunk?
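To make the unit-test framing above concrete, here is a hedged sketch of the kind of `-check` chunk being described, assuming the `grade_result()`/`pass_if()`/`fail_if()` interface gradethis had around the time of this thread; the exercise label is made up.

````markdown
```{r data-ex-solution}
mtcars
```

```{r data-ex-check}
gradethis::grade_result(
  # Targeted guidance for an anticipated failure mode
  gradethis::fail_if(~ identical(.result, cars),
                     "Careful: cars is a different data set than mtcars."),
  # The expected answer
  gradethis::pass_if(~ identical(.result, mtcars), "Nice work!")
)
```
````

The catch described in this thread is that none of this runs if the submission itself throws an error, because learnr short-circuits before the `-check` chunk is evaluated.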
Sounds like this implementation of …

```r
function(solution_code = NULL, check_code, ...) {
  if (is.null(solution_code)) {
    return(NULL)
  }
  grade_learnr(solution_code = solution_code, check_code = "grade_code()", ...)
}
```

This'll make sure …

```r
function(solution_code = NULL, check_code, ...) {
  if (is.null(solution_code)) {
    return(NULL)
  }
  grade_learnr(solution_code = solution_code, check_code = check_code, ...)
}
```
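For context, a sketch of how the wrapper above might be wired up, assuming the `exercise.error.checker` option added in #403 is read from knitr chunk options the same way `exercise.checker` is; that option was later replaced (see #426 below), so treat the names here as illustrative only.

```r
library(learnr)
library(gradethis)

# The wrapper from the comment above, given a name for registration purposes
error_checker <- function(solution_code = NULL, check_code, ...) {
  if (is.null(solution_code)) {
    return(NULL)
  }
  # On an evaluation error, fall back to comparing code against the solution
  grade_learnr(solution_code = solution_code, check_code = "grade_code()", ...)
}

# In the tutorial's setup chunk: exercise.* options are knitr chunk options
# (assumption: the error checker is picked up the same way exercise.checker is)
knitr::opts_chunk$set(
  exercise.checker = grade_learnr,
  exercise.error.checker = error_checker
)
```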
Remove exercise.error.checker in favor of exercise.error.check.code and -error-check chunks (#426)

* Remove exercise.error.checker in favor of exercise.error.check.code and -error-check chunks, closes #356
* Resolve error-check inheritance before checking whether we should check. This way the error check won't be run when 'Run Code' is clicked
* Add comment

Co-authored-by: Barret Schloerke <barret@rstudio.com>
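For reference, a hedged sketch of what the `-error-check` approach from #426 looks like in a tutorial: the `-error-check` chunk runs in place of the `-check` chunk when the submission throws an error, so the error path can still produce useful feedback. Chunk labels and the gradethis calls are illustrative.

````markdown
```{r error-ex, exercise = TRUE}

```

```{r error-ex-solution}
mtcars
```

```{r error-ex-check}
gradethis::grade_result(
  gradethis::pass_if(~ identical(.result, mtcars), "Nice work!")
)
```

```{r error-ex-error-check}
# Runs only when the submission errors; grade_code() can still compare the
# unevaluated code against the -solution chunk.
gradethis::grade_code()
```
````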
If a student submits code that throws an error, learnr exits the checking routine and displays the error message. This short circuit prevents the checking code from telling the student anything useful about what might have gone wrong. In short, the student is left to decipher the error message on his or her own.
This is necessary when the checking code relies on the result of the student code (if it throws an error, there is no result). But it is a missed opportunity when the checking code parses the unevaluated student code, as `gradethis::grade_code()` does. Here is an example: `grade_code()` would return "I expected mtcars where you wrote mt", but learnr displays the error message instead. (Code appended at bottom.)

To fix this, I suggest that we:

* Create a `tutorials.error.checker` option that can be set to a grading function to run in the event that the student code throws an error. This option can be set in the same manner as the `tutorials.exercise.checker` option. It should be set to a function that can grade the student code without evaluating it, similar to `gradethis::grade_code()`.

Code to recreate existing example above:
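The appended code was not captured in this copy of the issue; below is a hedged reconstruction of the kind of exercise described (solution `mtcars`, a submission like `mt` that errors). The chunk labels, the checker registration, and the `grade_code()` call are assumptions based on the surrounding discussion, not the author's exact code.

````markdown
```{r setup, include = FALSE}
library(learnr)
library(gradethis)
# Assumed setup: register gradethis as the exercise checker
knitr::opts_chunk$set(exercise.checker = gradethis::grade_learnr)
```

Print the mtcars data set.

```{r mtcars-ex, exercise = TRUE}

```

```{r mtcars-ex-solution}
mtcars
```

```{r mtcars-ex-check}
# grade_code() compares the unevaluated submission to the -solution chunk, so
# for a submission of `mt` it could say "I expected mtcars where you wrote mt".
# Today, learnr short-circuits on the evaluation error and shows the error instead.
gradethis::grade_code()
```
````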