When users open a file, we try to restore the view state from the last time they closed it. This means that after the file is opened, users are usually not looking at the top of the document. When the file is large, they first see uncolored code, and only after a few hundred milliseconds does the colorized content appear. This is because our tokenization is not fast enough.
Even though getting a correct tokenization state requires tokenizing the document from the first line to the last, we can try to guess the starting state when rendering an arbitrary viewport. If the guess is accurate enough, users get a better experience than they do today, even if the colors are occasionally wrong for a short time.
There are several ways to guess:

1. Go backwards 100 lines (or some other magic number) and start the tokenization from there (options 1 and 2 are sketched in code after this list).
2. Guess by indentation. Say the first line of the viewport has indentation level 3; we can go backwards to find the nearest lines with indentation levels 2, 1, and 0, then tokenize those three lines followed by the viewport.
3. Go backwards a few lines to check whether we are inside a block comment or a string; if not, tokenize directly from the first line of the viewport. This can work together with option 1.
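Here is a minimal sketch of heuristics 1 and 2. The `Tokenizer` interface, `guessStateByBackoff`, `guessStateByIndentation`, and `getIndentLevel` are illustrative assumptions, not the editor's actual tokenization API, and the sketch measures indentation in columns rather than strict levels for simplicity:

```typescript
// Hypothetical tokenizer interface: tokenizing a line returns the state at its end.
interface Tokenizer<S> {
  initialState(): S;
  tokenizeLine(line: string, state: S): S;
}

// Indentation of a line, counted in columns (tabs expanded).
function getIndentLevel(line: string, tabSize = 4): number {
  let indent = 0;
  for (const ch of line) {
    if (ch === ' ') indent++;
    else if (ch === '\t') indent += tabSize;
    else break;
  }
  return indent;
}

// Heuristic 1: back off a fixed number of lines and tokenize forward from there.
function guessStateByBackoff<S>(
  lines: string[],
  viewportStart: number,
  tokenizer: Tokenizer<S>,
  backoff = 100
): S {
  let state = tokenizer.initialState();
  for (let i = Math.max(0, viewportStart - backoff); i < viewportStart; i++) {
    state = tokenizer.tokenizeLine(lines[i], state);
  }
  return state;
}

// Heuristic 2: walk backwards collecting the nearest lines with strictly
// decreasing indentation, then tokenize only those lines, outermost first.
function guessStateByIndentation<S>(
  lines: string[],
  viewportStart: number,
  tokenizer: Tokenizer<S>
): S {
  const anchors: number[] = [];
  let target = getIndentLevel(lines[viewportStart]) - 1;
  for (let i = viewportStart - 1; i >= 0 && target >= 0; i--) {
    if (lines[i].trim().length > 0 && getIndentLevel(lines[i]) <= target) {
      anchors.unshift(i);
      target = getIndentLevel(lines[i]) - 1;
    }
  }
  let state = tokenizer.initialState();
  for (const anchor of anchors) {
    state = tokenizer.tokenizeLine(lines[anchor], state);
  }
  return state;
}
```

Whichever heuristic we pick, the guessed state is fed into the normal tokenizer for the viewport lines, and the guessed colors can be replaced once the real tokenization from the first line catches up.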
To verify whether we are good at guessing, we can run full tokenization against files and compare it with our guessing algorithm. The tokenization states don't need to be identical; as long as the resulting colors are correct, the guess counts as a good one. For example, we can run the TypeScript grammar against VSCode, TypeScript, Angular, etc.; the Ruby grammar against Rails, Jekyll, CocoaPods; and so on.
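A rough sketch of what such a comparison could look like, again against an assumed token/color interface rather than the real tokenizer (the `ColorTokenizer` interface, `colorAccuracy`, and the `guessState` parameter are hypothetical):

```typescript
// Hypothetical tokenizer that also exposes the colored tokens per line.
interface Token { startIndex: number; color: string; }

interface ColorTokenizer<S> {
  initialState(): S;
  tokenizeLine(line: string, state: S): { tokens: Token[]; endState: S };
}

function sameColors(a: Token[], b: Token[]): boolean {
  return a.length === b.length &&
    a.every((t, i) => t.startIndex === b[i].startIndex && t.color === b[i].color);
}

// Fraction of viewport lines whose colors match a full, ground-truth tokenization.
function colorAccuracy<S>(
  lines: string[],
  viewportStart: number,
  viewportEnd: number,
  tokenizer: ColorTokenizer<S>,
  guessState: (lines: string[], viewportStart: number) => S
): number {
  // Ground truth: tokenize from the very first line up to the end of the viewport.
  let truthState = tokenizer.initialState();
  const truth: Token[][] = [];
  for (let i = 0; i < viewportEnd; i++) {
    const r = tokenizer.tokenizeLine(lines[i], truthState);
    truth.push(r.tokens);
    truthState = r.endState;
  }

  // Guessed run: start the viewport from the guessed state.
  let state = guessState(lines, viewportStart);
  let matches = 0;
  for (let i = viewportStart; i < viewportEnd; i++) {
    const r = tokenizer.tokenizeLine(lines[i], state);
    if (sameColors(truth[i], r.tokens)) matches++;
    state = r.endState;
  }
  return matches / Math.max(1, viewportEnd - viewportStart);
}
```

Running this over many random viewports in large repos (VSCode, TypeScript, Angular for the TypeScript grammar; Rails, Jekyll, CocoaPods for Ruby) would give a per-heuristic accuracy number to compare.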