Add beforeEmbed and afterEmbed hook #1571
Changes from all commits: e6e346e, 4f69c60, 5436ccd, 5602d1e, 42b915e, 1ace9a1, 4ef235f
@@ -18,9 +18,21 @@ window.$docsify = {
       return content;
     });
 
+    hook.beforeEmbed(function(tokens, next) {
+      // Invoked each time after parsing the Markdown file, but before embedding is applied to the token list. Docsify's embed mechanism operates on this token list as well.
+      // ...
+      next(tokens);
+    });
+
+    hook.afterEmbed(function(tokens, next) {
+      // Invoked each time after parsing the Markdown file, and after embedding is applied to the token list.
+      // ...
+      next(tokens);
+    });
+
     hook.afterEach(function(html, next) {
-      // Invoked each time after the Markdown file is parsed.
-      // beforeEach and afterEach support asynchronous。
+      // Invoked each time after the Markdown file is parsed and the result HTML is generated.
+      // beforeEach, beforeEmbed, afterEmbed and afterEach support asynchronous。
       // ...
       // call `next(html)` when task is done.
       next(html);

I think we need to change the wording, or otherwise describe what "embedding" is. I don't think that Docsify users currently have any concept of "embed"; the term "embed" is currently just an internal detail.

I doubt that, given there is a whole page about embedding files in the documentation: https://docsify.js.org/#/embed-files

Oh, yeah, oops. 😄 Does marked give back tokens just for the embed? Or what exactly?

"tokens" is an array of tokens, produced by the lexer call here: docsify/src/core/render/embed.js, line 103 (at 66303fe).
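To make the token-based hooks concrete, here is a minimal sketch of a plugin that uses them. It assumes the hooks receive the token array produced by marked's lexer, as discussed above; the plugin name and the TODO-filtering behaviour are invented for illustration, not part of this PR.

```js
window.$docsify = {
  plugins: [
    function removeTodoParagraphs(hook, vm) {
      hook.beforeEmbed(function (tokens, next) {
        // Hypothetical example: drop paragraphs that start with "TODO:"
        // before Docsify resolves embedded files.
        const filtered = tokens.filter(
          token => !(token.type === 'paragraph' && /^TODO:/.test(token.text))
        );
        // marked attaches link definitions to the token array; keep them.
        filtered.links = tokens.links;
        next(filtered);
      });

      hook.afterEmbed(function (tokens, next) {
        // At this point the token list also contains the embedded content.
        next(tokens);
      });
    },
  ],
};
```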
@@ -5,6 +5,8 @@ export function initLifecycle(vm) {
     'init',
     'mounted',
     'beforeEach',
+    'beforeEmbed',
+    'afterEmbed',
     'afterEach',
     'doneEach',
     'ready',

We should list the order of hooks, otherwise people have to find it in the code right here.

The updates to the "Write a Plugin" page do a better job of describing when hooks are invoked. The updates aren't live yet, but they can be viewed on the
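Since the hook order question came up, a throwaway plugin like the sketch below could be used to observe when each hook fires. This is only an illustration; the per-page ordering in the comments reflects what this PR's documentation describes, not something the snippet itself guarantees.

```js
window.$docsify = {
  plugins: [
    function logHookOrder(hook) {
      // One-time hooks.
      hook.init(() => console.log('init'));
      hook.mounted(() => console.log('mounted'));
      hook.ready(() => console.log('ready'));

      // Per-page hooks, in the order this PR describes them running
      // for each Markdown file.
      hook.beforeEach(md => { console.log('beforeEach'); return md; });
      hook.beforeEmbed((tokens, next) => { console.log('beforeEmbed'); next(tokens); });
      hook.afterEmbed((tokens, next) => { console.log('afterEmbed'); next(tokens); });
      hook.afterEach((html, next) => { console.log('afterEach'); next(html); });
      hook.doneEach(() => console.log('doneEach'));
    },
  ],
};
```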
@@ -91,7 +91,10 @@ function walkFetchEmbed({ embedTokens, compile, fetch }, cb) {
   }
 }
 
-export function prerenderEmbed({ compiler, raw = '', fetch }, done) {
+export function prerenderEmbed(
+  { compiler, raw = '', fetch, beforeEmbed },
+  done
+) {
   let hit = cached[raw];
   if (hit) {
     const copy = hit.slice();

Another thing is, this hook would depend on marked. Maybe it is good for now (performance boost), and if we change the markdown renderer later then it would simply be a breaking change. Arguably even without this hook it would be a breaking change, because the output strings can change. But with this hook in place, it could be that a different markdown renderer does not have a concept of tokens that can be passed to a hook plugin.

Also, what if we wish to add the ability to specify markdown renderers? beforeEach and afterEach would continue to make sense, but maybe beforeEmbed would not make sense if a different markdown renderer does not pass tokens out. So the question is: do we want to add this, and be coupled to marked?

If we decide we want this, I think the way it is implemented looks good.

@@ -100,56 +103,61 @@ export function prerenderEmbed({ compiler, raw = '', fetch }, done) {
   }
 
   const compile = compiler._marked;
-  let tokens = compile.lexer(raw);
-  const embedTokens = [];
-  const linkRE = compile.Lexer.rules.inline.link;
-  const links = tokens.links;
-
-  tokens.forEach((token, index) => {
-    if (token.type === 'paragraph') {
-      token.text = token.text.replace(
-        new RegExp(linkRE.source, 'g'),
-        (src, filename, href, title) => {
-          const embed = compiler.compileEmbed(href, title);
-
-          if (embed) {
-            embedTokens.push({
-              index,
-              embed,
-            });
-          }
-
-          return src;
-        }
-      );
-    }
-  });
-
-  // keep track of which tokens have been embedded so far
-  // so that we know where to insert the embedded tokens as they
-  // are returned
-  const moves = [];
-  walkFetchEmbed({ compile, embedTokens, fetch }, ({ embedToken, token }) => {
-    if (token) {
-      // iterate through the array of previously inserted tokens
-      // to determine where the current embedded tokens should be inserted
-      let index = token.index;
-      moves.forEach(pos => {
-        if (index > pos.start) {
-          index += pos.length;
-        }
-      });
-
-      merge(links, embedToken.links);
-
-      tokens = tokens
-        .slice(0, index)
-        .concat(embedToken, tokens.slice(index + 1));
-      moves.push({ start: index, length: embedToken.length - 1 });
-    } else {
-      cached[raw] = tokens.concat();
-      tokens.links = cached[raw].links = links;
-      done(tokens);
-    }
-  });
+  if (!beforeEmbed) {
+    beforeEmbed = (tokens, done) => done(tokens);
+  } // noop
+  beforeEmbed(compile.lexer(raw), tokens => {
+    const embedTokens = [];
+    const linkRE = compile.Lexer.rules.inline.link;
+    const links = tokens.links;
+
+    tokens.forEach((token, index) => {
+      if (token.type === 'paragraph') {
+        token.text = token.text.replace(
+          new RegExp(linkRE.source, 'g'),
+          (src, filename, href, title) => {
+            const embed = compiler.compileEmbed(href, title);
+
+            if (embed) {
+              embedTokens.push({
+                index,
+                embed,
+              });
+            }
+
+            return src;
+          }
+        );
+      }
+    });
+
+    // keep track of which tokens have been embedded so far
+    // so that we know where to insert the embedded tokens as they
+    // are returned
+    const moves = [];
+    walkFetchEmbed({ compile, embedTokens, fetch }, ({ embedToken, token }) => {
+      if (token) {
+        // iterate through the array of previously inserted tokens
+        // to determine where the current embedded tokens should be inserted
+        let index = token.index;
+        moves.forEach(pos => {
+          if (index > pos.start) {
+            index += pos.length;
+          }
+        });
+
+        merge(links, embedToken.links);
+
+        tokens = tokens
+          .slice(0, index)
+          .concat(embedToken, tokens.slice(index + 1));
+        moves.push({ start: index, length: embedToken.length - 1 });
+      } else {
+        cached[raw] = tokens.concat();
+        tokens.links = cached[raw].links = links;
+        done(tokens);
+      }
+    });
+  });
 }
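For reference, a call into the new signature might look like the sketch below. Only the `beforeEmbed` parameter comes from this diff; the surrounding variables (`compiler`, `markdownText`, `fetch`) and the `renderTokens` consumer are placeholders, not code from this PR.

```js
// Hypothetical call site, sketched from the new prerenderEmbed signature.
prerenderEmbed(
  {
    compiler,           // Docsify's Markdown compiler (provides _marked)
    raw: markdownText,  // raw Markdown of the current page
    fetch,              // fetch implementation used to load embedded files
    // Hook invoked with marked's token list before embeds are resolved;
    // it must call `next` to let embedding continue.
    beforeEmbed: (tokens, next) => {
      // ...plugins could inspect or rewrite `tokens` here...
      next(tokens);
    },
  },
  finalTokens => {
    // `finalTokens` has the embedded files' tokens spliced in.
    renderTokens(finalTokens); // placeholder for the real consumer
  }
);
```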
@@ -33,6 +33,8 @@ describe('Docsify', function() {
 
     expect(hook.init).toBeInstanceOf(Function);
     expect(hook.beforeEach).toBeInstanceOf(Function);
+    expect(hook.beforeEmbed).toBeInstanceOf(Function);
+    expect(hook.afterEmbed).toBeInstanceOf(Function);
     expect(hook.afterEach).toBeInstanceOf(Function);
     expect(hook.doneEach).toBeInstanceOf(Function);
     expect(hook.mounted).toBeInstanceOf(Function);

Sidenote, we need better hooks tests.

Would you mind adding tests that show how to use the hook and tokens (and make sure it won't break later)? The other tests are bad (they are lacking, obviously), but moving forward we should test new features at least.
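A test along the lines the reviewers ask for might look like the sketch below. It assumes an `initDocsifyWithConfig` helper (hypothetical here) that boots Docsify with a given config and Markdown and waits for a page render; the assertions use the Jest-style matchers already seen in the suite, but the real setup may differ.

```js
it('calls beforeEmbed and afterEmbed with marked tokens', async function () {
  const seen = { before: null, after: null };

  // `initDocsifyWithConfig` is a hypothetical helper standing in for whatever
  // bootstrapping the real test suite uses to load a page through Docsify.
  await initDocsifyWithConfig({
    markdown: '# Title\n\n[remote](_media/example.md ":include")',
    plugins: [
      function (hook) {
        hook.beforeEmbed((tokens, next) => {
          seen.before = tokens;
          next(tokens);
        });
        hook.afterEmbed((tokens, next) => {
          seen.after = tokens;
          next(tokens);
        });
      },
    ],
  });

  expect(Array.isArray(seen.before)).toBe(true);
  expect(Array.isArray(seen.after)).toBe(true);
  // After embedding, the token list should also contain the included file's
  // tokens, so it is expected to be at least as long as before.
  expect(seen.after.length).toBeGreaterThanOrEqual(seen.before.length);
});
```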
I don't think we should pass `tokens` produced from a markdown parser to a plugin hook. As @trusktr pointed out, this makes the plugin functionality dependent on a specific markdown library, which IMHO is not a good idea.

More importantly, I don't think we should pass "markdown before it becomes HTML" in two different formats (`string` vs `tokens`) to two very similar hooks. If plugin authors want to work with tokens, they can generate their own tokens by accessing Docsify's renderer directly within their plugin:
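A rough sketch of that alternative is shown below. It assumes the plugin can reach the bundled marked instance through the internal `vm.compiler._marked` property, which is an implementation detail rather than a public API and could change between versions.

```js
window.$docsify = {
  plugins: [
    function tokensWithoutNewHooks(hook, vm) {
      hook.beforeEach(function (markdown) {
        // Internal detail: the marked instance Docsify compiles with.
        const marked = vm.compiler._marked;
        // Build tokens on demand instead of receiving them from a hook.
        const tokens = marked.lexer(markdown);
        // ...inspect `tokens` here, then return the (possibly modified) markdown...
        return markdown;
      });
    },
  ],
};
```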