Conversation

@sjd210 sjd210 commented Oct 21, 2025

This is preparatory work for allowing the marks awarded for LLMFreeTextQuestions (and potentially other question types in the future, such as Parsons and those using the Python Code Editor) to be displayed outside the question page itself, for example in the markbook. This PR by itself should have no visible effect on either site.

For most questions, marks are calculated as 1 if the answer is correct and 0 if it is incorrect. For LLMFreeTextQuestions, marks are instead derived from the marksAwarded field of the question response. There is currently no straightforward, meaningful way to do this for question types like Parsons without reference to the answer scheme, or for Inline questions, since they use multiple responses.
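The derivation above can be sketched as follows. This is a minimal illustrative sketch, not the actual isaac-api code: the class and method names (MarksSketch, marksFor, and the two stand-in response types) are hypothetical, though the marksAwarded field mirrors the one named in this PR.

```java
// Illustrative sketch only -- names here are hypothetical, not isaac-api's.
public class MarksSketch {
    /** Minimal stand-in for a question validation response. */
    static class ValidationResponse {
        final boolean correct;
        ValidationResponse(boolean correct) { this.correct = correct; }
    }

    /** LLM free-text responses additionally carry the marks the marker awarded. */
    static class LLMFreeTextValidationResponse extends ValidationResponse {
        final int marksAwarded;
        LLMFreeTextValidationResponse(boolean correct, int marksAwarded) {
            super(correct);
            this.marksAwarded = marksAwarded;
        }
    }

    /** Binary questions score 1/0 from correctness; LLM questions use marksAwarded. */
    static int marksFor(ValidationResponse response) {
        if (response instanceof LLMFreeTextValidationResponse) {
            return ((LLMFreeTextValidationResponse) response).marksAwarded;
        }
        return response.correct ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(marksFor(new ValidationResponse(true)));               // 1
        System.out.println(marksFor(new ValidationResponse(false)));              // 0
        System.out.println(marksFor(new LLMFreeTextValidationResponse(true, 3))); // 3
    }
}
```

The type check stands in for whatever dispatch the real DTO hierarchy uses; the point is only that LLM-marked parts carry a marks value that is not recoverable from the correct flag alone.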

I've tested backwards compatibility and all seems fine since we are not touching the question_attempt JSON object itself. The plan is to eventually phase out use of the correct column entirely, but for now both exist simultaneously.

Edit: This also sets the correctness criterion for LLMFreeTextQuestions to full marks rather than > 0 marks. For now, this may lead to a temporary discrepancy for users on the old API, but nothing breaking. Going forward, we should consider how to deal with this more broadly: should we also add a maxMarks column to the database, or are we okay extracting it from the question part whenever relevant, since it is unchanging?
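The criterion change can be illustrated like this. Again a hypothetical sketch, not the actual validator code; the method names are invented for illustration, and the partially-marked example shows the one case whose correctness flips.

```java
// Illustrative sketch of the revised correctness criterion for
// LLMFreeTextQuestions. Method names are hypothetical.
public class LLMCorrectnessSketch {
    /** Old criterion: any marks at all counted as correct. */
    static boolean correctOld(int marksAwarded, int maxMarks) {
        return marksAwarded > 0;
    }

    /** New criterion: only full marks count as correct. */
    static boolean correctNew(int marksAwarded, int maxMarks) {
        return marksAwarded == maxMarks;
    }

    public static void main(String[] args) {
        // A partially marked attempt (2 of 3) flips from correct to incorrect;
        // full marks and zero marks are judged the same under both criteria.
        System.out.println(correctOld(2, 3)); // true
        System.out.println(correctNew(2, 3)); // false
        System.out.println(correctNew(3, 3)); // true
    }
}
```

This is why only users on the old API can see a temporary discrepancy: the stored marks are unchanged, and only attempts with partial marks are judged differently.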

Since an LLM-marked question may award multiple marks per question rather than a binary correct/incorrect, this PR calculates a new "marks" field for question attempts in the database.

We previously allowed extracting all attempts for a page, but not for an individual question part. This PR adds that functionality.

In the context of the markbook/assignment progress, LLMFreeTextQuestionValidationResponses have a marksAwarded field that can only be read by extracting the full question attempt, but we want to keep attempts lightweight for all other question parts to minimise processing of unnecessary data.

This change therefore checks each question part and extracts the full response only for LLMFreeText ones.
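The selective extraction described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the PR's actual implementation: the type discriminator string, the two record-like classes, and the regex-based JSON probe are all hypothetical stand-ins for the real DTOs and Jackson deserialisation.

```java
// Illustrative sketch only: extract the full response just for LLM free-text
// parts, keeping every other part lightweight. Names are hypothetical.
public class SelectiveExtractionSketch {
    static final String LLM_FREE_TEXT_TYPE = "isaacLLMFreeTextQuestion";

    /** Lightweight view of an attempt: correctness, plus marks for LLM parts. */
    static class LightweightAttempt {
        final boolean correct;
        final Integer marksAwarded; // null unless the part is LLM-marked
        LightweightAttempt(boolean correct, Integer marksAwarded) {
            this.correct = correct;
            this.marksAwarded = marksAwarded;
        }
    }

    /** Raw attempt as stored: question type plus the full response JSON. */
    static class StoredAttempt {
        final String questionType;
        final boolean correct;
        final String fullResponseJson;
        StoredAttempt(String questionType, boolean correct, String fullResponseJson) {
            this.questionType = questionType;
            this.correct = correct;
            this.fullResponseJson = fullResponseJson;
        }
    }

    /** Only LLM free-text parts pay the cost of reading the full response. */
    static LightweightAttempt toLightweight(StoredAttempt stored) {
        if (LLM_FREE_TEXT_TYPE.equals(stored.questionType)) {
            int marks = parseMarksAwarded(stored.fullResponseJson); // full read here only
            return new LightweightAttempt(stored.correct, marks);
        }
        return new LightweightAttempt(stored.correct, null);
    }

    /** Stand-in for real JSON deserialisation of the validation response. */
    static int parseMarksAwarded(String json) {
        java.util.regex.Matcher m = java.util.regex.Pattern
                .compile("\"marksAwarded\"\\s*:\\s*(\\d+)").matcher(json);
        return m.find() ? Integer.parseInt(m.group(1)) : 0;
    }

    public static void main(String[] args) {
        StoredAttempt llm = new StoredAttempt(LLM_FREE_TEXT_TYPE, false,
                "{\"correct\": false, \"marksAwarded\": 2}");
        StoredAttempt choice = new StoredAttempt("isaacMultiChoiceQuestion", true,
                "{\"correct\": true}");
        System.out.println(toLightweight(llm).marksAwarded);    // 2
        System.out.println(toLightweight(choice).marksAwarded); // null
    }
}
```

The design point carries over regardless of the real types: branching on the question part's type before deserialising means the common case stays cheap and only LLM-marked parts incur the full parse.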
codecov bot commented Oct 21, 2025

Codecov Report

❌ Patch coverage is 36.11111% with 23 lines in your changes missing coverage. Please review.
✅ Project coverage is 37.28%. Comparing base (e7c2116) to head (00db082).
⚠️ Report is 4 commits behind head on main.

Files with missing lines                                 Patch %   Lines
...m/cl/dtg/isaac/dos/QuestionValidationResponse.java      0.00%   5 Missing and 1 partial ⚠️
...k/ac/cam/cl/dtg/isaac/quiz/PgQuestionAttempts.java     37.50%   5 Missing ⚠️
...c/dao/PgQuizQuestionAttemptPersistenceManager.java      0.00%   4 Missing ⚠️
...l/dtg/isaac/dto/QuestionValidationResponseDTO.java     40.00%   3 Missing ⚠️
.../dto/LLMFreeTextQuestionValidationResponseDTO.java      0.00%   2 Missing ⚠️
...cl/dtg/isaac/dto/FormulaValidationResponseDTO.java      0.00%   1 Missing ⚠️
...am/cl/dtg/isaac/dto/ItemValidationResponseDTO.java      0.00%   1 Missing ⚠️
...l/dtg/isaac/dto/QuantityValidationResponseDTO.java      0.00%   1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #731      +/-   ##
==========================================
- Coverage   37.29%   37.28%   -0.01%     
==========================================
  Files         536      536              
  Lines       23709    23728      +19     
  Branches     2861     2864       +3     
==========================================
+ Hits         8843     8848       +5     
- Misses      13984    13997      +13     
- Partials      882      883       +1     


@sjd210 sjd210 changed the title Propogate the marks awarded for an LLMFreeTextQuestion to gameboards Calculate new "marks" field for question attempts in the database Oct 23, 2025
@sjd210 sjd210 marked this pull request as ready for review October 23, 2025 13:47