Adds a hook for specifying an alternative to learnr:::evaluate_exercise() #386
Motivation
Per issue 356, this proposed change adds a way to replace the built-in exercise code evaluator (`learnr:::evaluate_exercise()`) with one of the tutorial author's choosing. This is accomplished by adding a global option, `learnr.alt.evaluator`, that can be set to a string naming the evaluator to be used. By default (that is, if the option `learnr.alt.evaluator` is `NULL`), the built-in code evaluator is used, so there is no change whatsoever to the learnr code-evaluation process.

Being able to specify an alternative evaluator allows more seamless parse-time checking and provides a means to evaluate the exercise code while doing the code checking, rather than learnr's approach of evaluating the code first and handing the results off to the checker.
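The dispatch this option enables can be sketched as follows. This is an illustrative reconstruction, not the actual PR diff; the helper name `resolve_evaluator` is hypothetical, and only the option name `learnr.alt.evaluator` comes from the change itself:

```r
# Hypothetical sketch of the dispatch logic: consult the global option and,
# if it names an evaluator, resolve that function; otherwise fall back to
# the built-in evaluator, leaving default behavior unchanged.
resolve_evaluator <- function(default = learnr:::evaluate_exercise) {
  alt <- getOption("learnr.alt.evaluator", default = NULL)
  if (is.null(alt)) {
    return(default)  # option unset: built-in evaluator, no change
  }
  # The option holds a string naming a function, e.g. "mypkg::my_evaluator"
  # (a hypothetical name); resolve it to the function object.
  eval(parse(text = alt))
}
```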
Structure of change
I attempted to minimize the extent of changes to the code base by using `options()` rather than `tutorialOptions()` to set the name of the alternative evaluator. It would perhaps be more natural for the user to set the alternative using `tutorialOptions()`, but that requires additional changes to the code base. (And, since `tutorialOptions()` involves knitr hooks, I'm not confident that I would do this properly.)

I do not know enough about the new remote evaluators to determine whether there is any impact on them.
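Concretely, a tutorial author would select an evaluator with a single `options()` call, typically in the tutorial's setup chunk. The evaluator name below is a placeholder, not a real package:

```r
# In the tutorial's setup chunk: name the alternative evaluator as a string.
# "mypkg::my_evaluator" is a hypothetical name for illustration only.
options(learnr.alt.evaluator = "mypkg::my_evaluator")

# To restore the built-in learnr evaluator, unset the option.
options(learnr.alt.evaluator = NULL)
```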
Impact of change
I am already using an alternative evaluator for a large set of tutorials I am developing. The evaluator is published as a package at github.com/dtkaplan/learnrAlt. The evaluator is compatible with the facilities of the `gradethis` package. I cannot predict whether many people will want to write an alternative evaluator, but mine can serve as a framework for others working on parse-time checkers or alternative graders. @garrettgman may have some opinion about this.

Pull Request
Add an entry to NEWS concisely describing what you changed. DONE.
Add unit tests in the tests/testthat directory. NOT DONE. But see the test below.
Run Build > Check Package in the RStudio IDE, or `devtools::check()`, to make sure your change did not add any messages, warnings, or errors. DONE.

Minimal reproducible example
See the attached learnr Rmd file, which contains a trivial alternative evaluator. For a complete working evaluator, see the github.com/dtkaplan/learnrAlt repository.

alt-eval.Rmd.zip
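For orientation, a trivial alternative evaluator might look something like the following. The signature and the shape of the return value are assumptions based on learnr's internals, not taken from the attached file; consult the Rmd above for the real interface:

```r
# Hypothetical trivial evaluator: parse and run the submitted code, then
# return the value together with a fixed feedback message. The exercise$code
# field and the returned list shape are assumptions for illustration.
trivial_evaluator <- function(exercise, envir = new.env()) {
  result <- eval(parse(text = exercise$code), envir = envir)
  list(
    feedback = list(
      message = "Evaluated by the alternative evaluator.",
      correct = TRUE
    ),
    value = result
  )
}
```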