
Set completions from the async thread #2563

Merged
merged 1 commit into from
Dec 3, 2024

Conversation

jwortmann
Member

According to sublimehq/sublime_text#6249 (comment) this should prevent lag/freezing when there is an excessive number of completion items.

@rchl
Member

rchl commented Nov 27, 2024

Will run with it for a while before merging.

@predragnikolic
Member

predragnikolic commented Nov 29, 2024

Here is how I tested this:
I cloned typescript-language-server and changed this code:

    async completion(params: lsp.CompletionParams, token?: lsp.CancellationToken): Promise<lsp.CompletionList | null> {
        
        const completions = asCompletionItems(entries, this.completionDataCache, filepath, params.position, document, this.tsClient, completionOptions, this.features, completionContext);
+        // added 100000 items on top of the returned TypeScript completion items, to mock a really huge response
+        for (let i = 0; i < 100000; i++) { 
+          completions.push({
+            ...completions[0],
+            label: `Item ${i}`
+          })
+        }
        return lsp.CompletionList.create(completions, isIncomplete);
    }

Then I built the server and tested with:

  • ST build 4183
  • ST build 4184
  • ST build 4184 with this PR.

The conclusion is that there is still a noticeable delay when typing.

4183.mp4
4184.mp4
4184-with-PR-2563.mp4

@predragnikolic
Member

predragnikolic commented Nov 29, 2024

I also tested the plugin that Rafal created to reproduce the slow completion issue ->
sublimehq/sublime_text#6249
There seems to be a noticeable improvement, but I can still notice some lag.

@predragnikolic
Member

predragnikolic commented Nov 30, 2024

Here is one theory.

It looks like ST itself can deal with 1,000,000 completions, but when a language server returns 10 MB of data as a string and we try to load that as objects in Python, then the lag appears.

This line will not be the bottleneck:

body = reader.read(int(headers.get("Content-Length")))

but this one will:

return orjson.loads(message)

orjson is fast, but from what I discovered, it is the bottleneck causing the lag.

I think the issue is similar to this article "Python’s memory-inefficient JSON loading"
https://pythonspeed.com/articles/json-memory-streaming/
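To make the theory concrete, here is a minimal sketch (not from the PR, and using stdlib json instead of orjson) of why a multi-megabyte completion response is expensive to decode: the parser must materialize every item as a Python object at once, so decode time and memory grow with the payload. The item count and shape are made up for illustration.

```python
import json
import time

# Fake a large CompletionList-shaped payload, roughly like what a
# language server would return (count chosen arbitrarily).
items = [{"label": f"Item {i}"} for i in range(200_000)]
payload = json.dumps({"isIncomplete": False, "items": items}).encode()
print(f"payload size: {len(payload) / 1e6:.1f} MB")

# Decoding builds 200,000 dicts and strings in one go; this is the
# kind of work that lags if it runs on the UI thread.
start = time.perf_counter()
decoded = json.loads(payload)
elapsed = time.perf_counter() - start
print(f"decoded {len(decoded['items'])} items in {elapsed:.3f}s")
```

With orjson the constant factor is smaller, but the all-at-once object materialization that the linked article describes is the same.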


@predragnikolic
Member

Here is the minimal plugin code to reproduce the issue.
Advent of Code - with a bit of an LSP issue.zip

@jwortmann
Member Author

jwortmann commented Dec 1, 2024

> orjson is fast, but in my discoveries the bottleneck on why the lag exists.

But all of that should happen in a background thread, so it should not block the UI rendering. The reader loop is created in a separate thread:

self._reader_thread = threading.Thread(target=self._read_loop, name=f'{name}-reader')

Iirc Python doesn't support "real" multithreading, but still the constructor of ProcessTransport is called from Sublime's async thread:

transport = create_transport(transport_config, transport_cwd, session)

which is part of
def start_async(self, config: ClientConfig, initiating_view: sublime.View) -> None:


In your code example the orjson.loads is in on_query_completions which runs on the UI thread. So in that case the lag is expected.

A corresponding modification of your code which would do the JSON decoding on the async thread would be like this:

from __future__ import annotations
import sublime_plugin
import sublime
import orjson
from typing import TypedDict
from .fake_data import data # 18mb of data as byte string


class CompletionListener(sublime_plugin.ViewEventListener):
    def on_query_completions(self, prefix, locations):
        completion_list = sublime.CompletionList()
        sublime.set_timeout_async(lambda: self._process_completions_async(completion_list))
        return completion_list

    def _process_completions_async(self, completion_list: sublime.CompletionList) -> None:
        lsp_completion_list: LspCompletionList = orjson.loads(data) # make this line fast :)

        completion_list.set_completions(
            [c['label'] for c in lsp_completion_list['items']],
            flags=sublime.AutoCompleteFlags.INHIBIT_WORD_COMPLETIONS | sublime.AutoCompleteFlags.INHIBIT_EXPLICIT_COMPLETIONS)


class LspCompletionList(TypedDict):
    isIncomplete: bool
    items: list['LspCompletionItem']

class LspCompletionItem(TypedDict):
    label: str

With this I can still see lag on ST 4180. You could check whether or not it works lag-free on ST 4184. If not, this could be used as a minimal example to report back in the ST issue tracker.

@rchl
Member

rchl commented Dec 1, 2024

This was also discussed on Discord, where Python's GIL was mentioned.

There is a good article about the GIL at https://realpython.com/python-gil/ . Based on that, it seems a CPU-bound thread provides basically no performance benefit in Python.

There would still be a benefit in not blocking I/O, but I guess the typing lag can result from either.

@jwortmann
Member Author

My assumption was (is) that the UI thread and the async thread from ST are created in C++. So they are "real" threads and don't block each other.

@rchl
Member

rchl commented Dec 3, 2024

Not sure. Those still run in the same Python environment, so I would think the GIL still applies.

Also, if the async thread is blocked, then the *_async event handlers can't fire, and that can also cause some form of blocking (not a complete UI block, but certain actions can become unresponsive).

@rchl
Member

rchl commented Dec 3, 2024

This looks good, I would say.
Any potential improvements should be handled in a separate issue.

@rchl rchl merged commit 9e30185 into main Dec 3, 2024
8 checks passed
@rchl rchl deleted the set-completions-async branch December 3, 2024 07:46