fix: Full Commonmark compliance for Lists #2112
Merged
All 23 commits are by calculuschild:

- `4482e0c` Rewrite List Tokenizer, Fix incorrect "new" spec tests
- `ec216ae` cleanup
- `5cfa5de` more cleanup
- `1697a2b` Merge remote-tracking branch 'upstream/master' into cleanUpLists
- `26a56fa` Passing all Spec tests
- `65907ee` Fix some unit tests (lists no longer consume blank lines at end of list)
- `3dd2aff` Fix more "lists consuming blank lines" unit tests.
- `4fe6e7c` All unit tests passing!
- `41f9bde` Lint
- `bb8fe00` Two more commonmark examples fixed
- `d5ba0f3` List and List Items fully Commonmark Compliant!!!
- `3a409a1` bundles
- `724ea9e` Replace \h with ' '
- `c934790` Clean comments out of Rules.js
- `8007f8b` Don't rebuild "next bullet regex" for each line / replace 'match' wit…
- `a79772b` Replace more .match and .exec with .test for speed
- `7014613` Merge remote-tracking branch 'upstream/master' into cleanUpLists
- `f78d06e` update to commonmark 0.30. Still passing all tests...
- `7140744` Rebase onto #2124
- `26c58fd` Merge branch 'cleanUpLists' of https://github.com/calculuschild/marke…
- `7d31421` Finish rebase
- `823dccf` lint
- `2dfda0e` update packaged lib
src/Lexer.js:

```diff
@@ -153,7 +153,7 @@ module.exports = class Lexer {
       src = src.substring(token.raw.length);
       lastToken = tokens[tokens.length - 1];
       // An indented code block cannot interrupt a paragraph.
-      if (lastToken && lastToken.type === 'paragraph') {
+      if (lastToken && (lastToken.type === 'paragraph' || lastToken.type === 'text')) {
         lastToken.raw += '\n' + token.raw;
         lastToken.text += '\n' + token.text;
       } else {
```
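This hunk extends lazy continuation to `text` tokens (the paragraph-like tokens inside list items): an indented chunk following such a token is appended to it rather than opening an indented code block, per the CommonMark rule that indented code cannot interrupt a paragraph. A minimal illustration of that rule (our own example, following the spec):

```markdown
aaa
    bbb
```

Per CommonMark this input is a single paragraph containing both lines, not a paragraph followed by an indented code block.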
```diff
@@ -204,6 +204,10 @@ module.exports = class Lexer {
       l = token.items.length;
       for (i = 0; i < l; i++) {
         token.items[i].tokens = this.blockTokens(token.items[i].text, [], false);
+        if (token.items[i].tokens.some(t => t.type === 'space')) {
+          token.loose = true;
+          token.items[i].loose = true;
+        }
       }
       tokens.push(token);
       continue;
```
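The loose-list marking added above can be sketched in isolation: after each item's children are lexed, a `space` token (a blank line) among them marks both that item and the whole list as loose. A minimal standalone sketch; the token objects here are simplified stand-ins, not marked's real token shapes:

```javascript
// Sketch of the loose-list marking step. Token objects are simplified
// stand-ins for marked's real tokens: { type: 'text' | 'space' | ... }.
function markLoose(listToken) {
  for (const item of listToken.items) {
    // A 'space' token among an item's child tokens means a blank line
    // occurred inside the item, which makes the whole list loose.
    if (item.tokens.some(t => t.type === 'space')) {
      listToken.loose = true;
      item.loose = true;
    }
  }
  return listToken;
}

const tight = markLoose({ items: [{ tokens: [{ type: 'text' }] }] });
const loose = markLoose({ items: [{ tokens: [{ type: 'text' }, { type: 'space' }] }] });
console.log(tight.loose, loose.loose); // undefined true
```

A loose list renders each item's text wrapped in `<p>` tags; a tight list does not, which is why the flag has to be decided before rendering.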
```diff
@@ -217,13 +221,19 @@ module.exports = class Lexer {
       }

       // def
-      if (top && (token = this.tokenizer.def(src))) {
+      if (token = this.tokenizer.def(src)) {
         src = src.substring(token.raw.length);
-        if (!this.tokens.links[token.tag]) {
-          this.tokens.links[token.tag] = {
-            href: token.href,
-            title: token.title
-          };
+        lastToken = tokens[tokens.length - 1];
+        if (lastToken && (lastToken.type === 'paragraph' || lastToken.type === 'text')) {
+          lastToken.raw += '\n' + token.raw;
+          lastToken.text += '\n' + token.raw;
+        } else {
+          if (!this.tokens.links[token.tag]) {
+            this.tokens.links[token.tag] = {
+              href: token.href,
+              title: token.title
+            };
+          }
         }
         continue;
       }
```

On the `if (!this.tokens.links[token.tag]) {` line, a reviewer left: "Nit pick: this could be changed to …"
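The `def` change implements the CommonMark rule that a link reference definition cannot interrupt a paragraph: if the previous token is a paragraph (or a `text` token), the would-be definition is folded into that token instead of being registered as a link. A simplified sketch of that branching; the function and token shapes are illustrative, not marked's internals:

```javascript
// Sketch: register a link definition only when it does not continue a
// paragraph; otherwise append its raw text to the preceding token.
function handleDef(tokens, links, def) {
  const last = tokens[tokens.length - 1];
  if (last && (last.type === 'paragraph' || last.type === 'text')) {
    // The "definition" is really just more paragraph text.
    last.raw += '\n' + def.raw;
    last.text += '\n' + def.raw;
  } else if (!links[def.tag]) {
    // First definition for a tag wins; later duplicates are ignored.
    links[def.tag] = { href: def.href, title: def.title };
  }
}

const links = {};
const tokens = [{ type: 'paragraph', raw: 'hello', text: 'hello' }];
handleDef(tokens, links, { tag: 'foo', raw: '[foo]: /url', href: '/url' });
console.log(Object.keys(links).length); // 0: the definition joined the paragraph
```

Dropping the `top` guard means definitions are now recognized at any nesting level, so the paragraph-interruption check has to happen here instead.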
This should be part of the list tokenizer. If someone wants to extend the list tokenizer to prevent loose lists they should only need to extend the list tokenizer.
This relies on the previous line as well, because we need to lex the child list items first. Can I just put the whole section into the Tokenizer? (i.e., is it ok if a Tokenizer handles the lexing of its own children?) And if so, we would need access to the Lexer inside the list Tokenizer via `call` or something like we did in the "extensions" update.

Edit: ...except `call` doesn't work here, since using `call(this, src)` means the Tokenizer can no longer access its own properties, since `this` is now the Lexer.

(References marked/src/Lexer.js, lines 206 to 210 at c934790.)
Unfortunately, determining whether the list is loose is not as simple as checking for blank lines in the list `raw` either, since code blocks, for example, can contain blank lines that don't count.

Kinda dumb, but what about assigning the lexer as a property of its tokenizer object so it can access parsing functions?
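The code-block caveat is exactly the CommonMark tight/loose subtlety. Adapting one of the spec's examples (tilde fences substituted so the sample nests here), this list is tight even though its raw text contains blank lines, because those blank lines sit inside a fenced code block:

```markdown
- a
- ~~~
  b


  ~~~
- c
```

So looseness has to be judged on the lexed child tokens, not on the raw source.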
This is a tough one. I'm tempted to say that it is fine how it is, since someone could create an extension that runs before the list tokenizer that consumes the list to create a non-loose list. But this is a slippery slope if we start moving stuff out of the tokenizers.

I'm thinking moving `this.blockTokens` into the tokenizers would be a better solution, since it is available in the extensions. Either way I think this will be a breaking change.
To be honest, it would be nice if all of the tokens that could contain other tokens would call `blockTokens` or `inlineTokens` inside the tokenizer. That should probably be a separate PR.
In extensions we used `call` to do that, but my impression is `call` is somewhat slower than being able to just call the tokenizer functions directly.

Is there some other way to change `this` in the tokenizer functions that you're referring to?
I am referring to `call`, but if it is slower then we can do something like `this.tokenizer.lexer = this;`

I feel like it should be consistent with extensions, so maybe we should change extensions as well?
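The back-reference idea can be sketched as follows: instead of `Function.prototype.call`, the lexer stores itself on the tokenizer, so tokenizer methods keep their own `this` yet can still reach the lexer's `blockTokens` to lex child items. Class and method bodies here are illustrative stand-ins, not marked's actual API:

```javascript
class Tokenizer {
  list(src) {
    // `this` is still the Tokenizer, so its own state stays accessible,
    // and child items can be lexed via the back-reference to the Lexer.
    const items = src.split('\n').filter(Boolean);
    return { type: 'list', items: items.map(i => this.lexer.blockTokens(i)) };
  }
}

class Lexer {
  constructor() {
    this.tokenizer = new Tokenizer();
    this.tokenizer.lexer = this; // the back-reference pattern discussed above
  }
  blockTokens(src) {
    // Stand-in for the real block lexer.
    return [{ type: 'text', raw: src }];
  }
  lex(src) {
    return this.tokenizer.list(src);
  }
}

const token = new Lexer().lex('- a\n- b');
console.log(token.items.length); // 2
```

Compared with `tokenizer.list.call(lexer, src)`, this keeps both objects reachable and avoids rebinding `this` on every call.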
Would we want that PR to refactor all the tokenizers with child tokens or should those all be separate PRs? Changing any of them is probably breaking so maybe all at once, no?
Ya all at once. Maybe we should keep this PR as is then change it in that PR.
Sounds good. I'll see what I can do with that.