Calculate new "marks" field for question attempts in the database #731
Conversation
Since an LLM-marked question may award multiple marks per question rather than a binary correct/incorrect result, this introduces a new "marks" field for question attempts in the database.
We previously allowed extracting all attempts for a page, but not for an individual question part. This adds that functionality.
In the context of the markbook/assignment progress, LLMFreeTextQuestionValidationResponses have a marksAwarded field that can only be read by extracting the full question attempt. For all other question parts we want to keep these attempts lightweight to avoid processing unnecessary data, so this change checks each question part and extracts the full response only for LLMFreeText ones.
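Roughly, the selection logic behaves like the sketch below. This is a minimal illustration only: the type and method names (QuestionAttemptSelector, AttemptStore, fetchFullAttempts, fetchLightweightAttempts) are hypothetical stand-ins, not the actual isaac-api classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class QuestionAttemptSelector {

    /** Hypothetical data-access interface for stored question attempts. */
    interface AttemptStore {
        List<Object> fetchFullAttempts(String questionPartId);
        List<Object> fetchLightweightAttempts(String questionPartId);
    }

    /** Only LLM free-text parts need the full validation response. */
    static boolean isLLMFreeTextQuestion(String questionPartType) {
        // Assumed type discriminator; the real content type name may differ.
        return "isaacLLMFreeTextQuestion".equals(questionPartType);
    }

    /**
     * For each question part, load the cheap lightweight attempts, except for
     * LLM free-text parts where the full response (containing marksAwarded)
     * is required.
     */
    static List<Object> collectAttempts(Map<String, String> questionPartTypesById,
                                        AttemptStore store) {
        List<Object> attempts = new ArrayList<>();
        for (Map.Entry<String, String> part : questionPartTypesById.entrySet()) {
            if (isLLMFreeTextQuestion(part.getValue())) {
                // Full response needed to read marksAwarded.
                attempts.addAll(store.fetchFullAttempts(part.getKey()));
            } else {
                // Lightweight record (correct flag, timestamp) is enough.
                attempts.addAll(store.fetchLightweightAttempts(part.getKey()));
            }
        }
        return attempts;
    }
}
```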
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

@@            Coverage Diff             @@
##              main     #731      +/-  ##
==========================================
- Coverage    37.29%   37.28%   -0.01%
==========================================
  Files          536      536
  Lines        23709    23728      +19
  Branches      2861     2864       +3
==========================================
+ Hits          8843     8848       +5
- Misses       13984    13997      +13
- Partials       882      883       +1
This is preparatory work for allowing the marks awarded for LLMFreeTextQuestions (and potentially others in the future, like Parsons and those using the Python Code Editor) to be displayed outside the question page itself, such as in the markbook. This PR by itself should have no visible effect on either site.

For most questions, marks are calculated as 1 if the answer is correct and 0 if it is incorrect. For LLMFreeTextQuestions, marks are derived from the marksAwarded field of the question response. There is currently no straightforward, meaningful way to do this for question types like Parsons without reference to the answer scheme, or for Inline questions since they use multiple responses.

I've tested backwards compatibility and all seems fine, since we are not touching the question_attempt JSON object itself. The plan is to eventually phase out use of the correct column entirely, but for now both exist simultaneously.

Edit: This also sets the correctness criterion for LLMFreeTextQuestions to full marks rather than > 0 marks. For now, this may lead to a temporary discrepancy for users on the old API, but nothing breaking. For the future, we should consider how to deal with this more broadly. Should we also add a maxMarks column to the database, or are we okay extracting it from the question part whenever relevant, since it is unchanging?
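As a rough illustration of the rules above (binary marks for most questions, marksAwarded for LLM free-text, and correctness only on full marks), here is a minimal sketch. The class and field names are hypothetical stand-ins for the real isaac-api DTOs, not the actual implementation.

```java
class MarksCalculator {

    /** Minimal stand-in for a stored question validation response. */
    static class ValidationResponse {
        boolean correct;
        Integer marksAwarded;   // only populated for LLM free-text questions
        Integer maxMarks;       // assumed to come from the question part definition
        boolean isLLMFreeText;
    }

    /**
     * Marks are binary for most question types (1 for correct, 0 otherwise);
     * for LLM free-text questions they come from marksAwarded.
     */
    static int calculateMarks(ValidationResponse response) {
        if (response.isLLMFreeText && response.marksAwarded != null) {
            return response.marksAwarded;
        }
        return response.correct ? 1 : 0;
    }

    /**
     * Correctness criterion described in the edit above: an LLM free-text
     * attempt counts as correct only on full marks, not merely > 0 marks.
     */
    static boolean isCorrect(ValidationResponse response) {
        if (response.isLLMFreeText && response.maxMarks != null) {
            return response.marksAwarded != null
                    && response.marksAwarded.equals(response.maxMarks);
        }
        return response.correct;
    }
}
```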