Set completions from the async thread #2563
Will run with it for a while before merging.
Here is how I tested this:

```typescript
async completion(params: lsp.CompletionParams, token?: lsp.CancellationToken): Promise<lsp.CompletionList | null> {
    const completions = asCompletionItems(entries, this.completionDataCache, filepath, params.position, document, this.tsClient, completionOptions, this.features, completionContext);
    // added 100000 items on top of the returned TypeScript completion items, to mock a really huge response
    for (let i = 0; i < 100000; i++) {
        completions.push({
            ...completions[0],
            label: `Item ${i}`
        });
    }
    return lsp.CompletionList.create(completions, isIncomplete);
}
```

Then I've built the server and started testing with:
The conclusion is that there is still a noticeable delay when typing.

(attached videos: 4183.mp4, 4184.mp4, 4184-with-PR-2563.mp4)
I also tested the plugin that Rafal created to reproduce the slow completion issue ->
Here is one theory. It looks like ST can deal with 1_000_000 completions, but when a language server returns 10mb of data as a string, and that string is loaded into Python objects, then the lag appears. This line will not be the bottleneck:

but this one will:

I think the issue is similar to the article "Python's memory-inefficient JSON loading". EDIT:
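The decoding cost is easy to reproduce outside of ST. Here is a minimal sketch (plain stdlib `json` on a synthetic payload, not the plugin's actual data) showing that loading a few megabytes of completion JSON into Python objects is real work:

```python
import json
import time

# Build a synthetic payload shaped like an LSP CompletionList;
# 100000 small items add up to a few megabytes of JSON text.
items = [{"label": f"Item {i}", "kind": 6} for i in range(100_000)]
payload = json.dumps({"isIncomplete": False, "items": items})
print(f"payload size: {len(payload) / 1e6:.1f} MB")

start = time.perf_counter()
# Every item becomes a fresh dict plus str objects -- this allocation
# is where the memory (and time) goes.
decoded = json.loads(payload)
print(f"decoded {len(decoded['items'])} items "
      f"in {time.perf_counter() - start:.3f}s")
```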
Here is the minimal plugin code to reproduce the issue.
But all of that should happen in a background thread, so it should not block the UI rendering. The reader loop is created in a separate thread:

Line 116 in 9be040d

IIRC Python doesn't support "real" multithreading, but still, the constructor of

Line 281 in 9be040d

which is part of

Line 246 in 9be040d
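For reference, the reader-loop-in-a-thread pattern being discussed can be sketched like this (illustrative names, not the plugin's actual classes):

```python
import queue
import threading


class TransportReader:
    """Sketch of a transport reader: a loop running in a daemon thread
    reads raw messages and hands them off via a queue, so blocking
    reads never happen on the UI thread."""

    def __init__(self, read_message):
        self._read_message = read_message  # blocking read callable
        self.messages: queue.Queue = queue.Queue()
        self.thread = threading.Thread(target=self._reader_loop, daemon=True)
        self.thread.start()

    def _reader_loop(self) -> None:
        while True:
            message = self._read_message()
            if message is None:  # transport closed
                break
            # Note: if JSON decoding happened here instead, a huge
            # payload would tie up the interpreter in this thread.
            self.messages.put(message)
```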
In your code example the […] A corresponding modification of your code, which would do the JSON decoding on the async thread, would be like this:

```python
from __future__ import annotations

import sublime_plugin
import sublime
import orjson

from typing import TypedDict

from .fake_data import data  # 18mb of data as byte string


class CompletionListener(sublime_plugin.ViewEventListener):

    def on_query_completions(self, prefix, locations):
        completion_list = sublime.CompletionList()
        sublime.set_timeout_async(lambda: self._process_completions_async(completion_list))
        return completion_list

    def _process_completions_async(self, completion_list: sublime.CompletionList) -> None:
        lsp_completion_list: LspCompletionList = orjson.loads(data)  # make this line fast :)
        completion_list.set_completions(
            [c['label'] for c in lsp_completion_list['items']],
            flags=sublime.AutoCompleteFlags.INHIBIT_WORD_COMPLETIONS | sublime.AutoCompleteFlags.INHIBIT_EXPLICIT_COMPLETIONS)


class LspCompletionList(TypedDict):
    isIncomplete: bool
    items: list['LspCompletionItem']


class LspCompletionItem(TypedDict):
    label: str
```

With this I can still see lag on ST 4180. You could check whether or not it works lag-free on ST 4184. If not, this could be used as a minimal example to report back in the ST issue tracker.
It was also discussed in Discord, where Python's GIL was mentioned. There is a good article about the GIL at https://realpython.com/python-gil/. Based on that, it seems a CPU-bound thread basically has no performance benefit in Python. There would still be benefits in not blocking the IO, but I guess typing lag can suffer from either.
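A small experiment (my own sketch, not from the plugin) that illustrates the point: while a background thread is busy inside one large `json.loads` call, "UI ticks" on the main thread can stall, because CPython's C-level JSON decoder can hold the GIL for the duration of the call:

```python
import json
import threading
import time

# Synthetic multi-megabyte payload, as in the completion scenario.
payload = json.dumps({"items": [{"label": f"Item {i}"} for i in range(300_000)]})

result = {}

def decode():
    # CPU-bound work; under the GIL it competes with the main thread
    # for the interpreter.
    result["data"] = json.loads(payload)

worst_pause = 0.0
t = threading.Thread(target=decode)
t.start()
while True:
    tick = time.perf_counter()
    time.sleep(0.001)  # a simulated "UI tick"; sleep releases the GIL
    worst_pause = max(worst_pause, time.perf_counter() - tick)
    if not t.is_alive():
        break
t.join()

# If the background decode held the GIL, this is much more than 1 ms.
print(f"worst pause between ticks: {worst_pause * 1000:.1f} ms")
```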
My assumption was (is) that the UI thread and the async thread of ST are created in C++. So they are "real" threads and don't block each other.
Not sure. Those still have to run in the same Python environment, so I would think that the GIL still applies. Also, if the async thread is blocked then the …
This looks good, I would say.
According to sublimehq/sublime_text#6249 (comment) this should prevent lag/freezing if there is an excessive number of completion items.