Make source file valid markdown #4
Comments
I would have preferred plain markdown, but given the range of question types that need to be supported, I couldn't come up with a way to do that without making things much more verbose. For example, Pandoc discards literal list markers, so there would have to be another way to indicate correct answers. If you have suggestions for a way to use Pandoc Markdown for everything without verbose syntax, I'd be interested in considering that as an alternate input format.

For math processing, I'm already extracting all math and converting it into a Canvas rendering URL. If you need it processed in another way instead, that could be added as an alternative to the default. I expected that in some cases a different approach for LaTeX might be needed, but hadn't actually encountered one yet. What sort of processing does your LaTeX need to produce plain HTML or MathML?
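As a rough illustration of that default math handling, here is a minimal sketch of turning an extracted LaTeX snippet into a Canvas rendering URL. The `equation_images` endpoint, the instructure.com domain, and the simple URL-quoting are assumptions for illustration; the actual tool may encode or host things differently.

```python
import urllib.parse

# Hypothetical base URL for Canvas's LaTeX image renderer (an assumption,
# not necessarily what text2qti emits).
CANVAS_EQ_BASE = "https://canvas.instructure.com/equation_images/"

def latex_to_canvas_img(latex: str) -> str:
    """Wrap extracted LaTeX in an <img> tag that asks Canvas to render it."""
    encoded = urllib.parse.quote(latex, safe="")
    return f'<img class="equation_image" alt="{latex}" src="{CANVAS_EQ_BASE}{encoded}">'

print(latex_to_canvas_img(r"\forall x(P(x) \rightarrow (Q(x) \land R(x)))"))
```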
To indicate correct answers, you could just put that info at the beginning of the question text, e.g., instead of […]. You might even use three levels of headers for tests, sections, and items. Of course this all depends on what Python's Markdown library can do.

As for LaTeX processing, I'd need the QTI XML to have LaTeX either approximated as HTML or converted to MathML. E.g., I'd put […], i.e., ∀x(P(x) → (Q(x) ∧ R(x))). But Pandoc also converts to MathML, so if you need more complex math, you could get Pandoc to spit out Markdown with embedded MathML. If your LMS has it and plays along, you can use the equation editor to edit the formula afterward. (Brightspace is happy with the MathML Pandoc generates.)
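As a concrete example of that Pandoc route, the following sketch shells out to Pandoc with its `--mathml` flag to turn TeX math into MathML in an HTML fragment; whether the resulting raw MathML then survives a later text2qti run is an assumption to verify.

```python
import subprocess

# Convert a Markdown snippet containing TeX math into an HTML fragment
# in which the math is rendered as MathML.
snippet = r"Show that $\forall x(P(x) \rightarrow (Q(x) \land R(x)))$ holds."
result = subprocess.run(
    ["pandoc", "--from", "markdown", "--to", "html", "--mathml"],
    input=snippet,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # e.g. <p>Show that <math ...>…</math> holds.</p>
```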
…han with a Canvas LaTeX rendering URL (#4); fixed a bug that allowed trailing whitespace to cause incorrect indentation calculations
I'm still not convinced that a pure Markdown solution can avoid being too verbose. Question metadata could be handled by attributes on a header or fenced div. The real issue is keeping the list of possible answers separate from the question itself, since the question can contain arbitrary content, including lists. A logical approach would be to use a fenced div for the question and another fenced div for the list of possible answers, or a header before the question and a fenced div around the answers, or a header before the question and a subheader before the answers. That, plus something like […].

In the last commit, I've added a new option […].
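For concreteness, a hypothetical pure-Pandoc-Markdown layout along the lines described above (header attributes for metadata, a fenced div to separate the answers from the question) might look roughly like this; the class and attribute names are invented for illustration and are not text2qti syntax:

```markdown
## Question 1 {.multiple-choice points=1}

Which connective distributes over conjunction? The question body can
contain arbitrary content, including lists:

- a hint
- another hint

::: {.answers}
- Negation
- [Conjunction]{.correct}
- Exclusive or
:::
```

The bracketed span marking the correct answer is valid Pandoc Markdown and survives Pandoc processing (unlike a literal `*` list marker), but it is also exactly the kind of extra verbosity the comment above is worried about.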
Amazing, thank you!
Suggestions: change or expand the accepted input format to be actually valid markdown. This would have the advantage that you could use other tools, such as pandoc, to preprocess the markdown quiz. As it stands, this can't be done, because correct answers are indicated by a `*` at the beginning of the line. `text2qti` also balks at answers numbered `a.` instead of `a)`.

(I'm hoping to make use of `text2qti` for generating test banks. The source will include lots of LaTeX formulas, but I can't use the images currently provided by `text2qti`. My plan would be to use `pandoc` to convert the LaTeX to plain HTML or MathML, and then run `text2qti` on the resulting Markdown.)
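To make the constraint concrete, here is a minimal question in the currently accepted format, based only on the behavior described in this issue: a leading `*` marks the correct answer, and answer letters must be followed by `)` rather than `.` (the exact spacing is illustrative):

```
1.  Which answer is marked correct below?
a)  This one is a distractor.
*b) This one, because of the leading asterisk.
c)  Lettering it "c." instead of "c)" would be rejected.
```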