Dynamically create language server specs specific to each kernel #1099

Open
lionel- opened this issue Sep 23, 2024 · 2 comments

lionel- commented Sep 23, 2024

Elevator Pitch

Currently, language server specifications can only be defined globally, and all kernels have to use the same specification. Would it be possible to provide a way for kernels to generate language server specifications dynamically?

Motivation

We are developing Ark (https://github.com/posit-dev/ark), a new Jupyter kernel for R. One distinctive feature of Ark is that it includes an LSP server that lives in the same process as the kernel. This gives the LSP access to the current state of the workspace, so that it can provide e.g. completions for global objects defined interactively by the user.

While our main target is the Positron IDE (https://github.com/posit-dev/positron), we would like users of Jupyter apps to be able to connect to our LSP server as well. The main problem we are facing is that configuration for jupyter-lsp is done via static files (https://jupyterlab-lsp.readthedocs.io/en/latest/Configuring.html) or via the LanguageServerManager feature, which, IIUC, only provides the ability to interpolate from global state, not state specific to a running kernel. This is problematic for us because we need a way to determine which kernel session to connect to.
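
For context, the global configuration today looks roughly like the following sketch (key names are taken from the jupyter-lsp docs; the R invocation itself is only illustrative). The spec is fixed up front and cannot vary per kernel session:

```python
# jupyter_server_config.py -- rough sketch of the current, global configuration.
# `c` is the config object Jupyter provides when loading this file.
c.LanguageServerManager.language_servers = {
    "r-languageserver": {
        "version": 2,
        "argv": ["R", "--slave", "-e", "languageserver::run()"],
        "languages": ["r"],
        "mime_types": ["text/x-rsrc"],
        "display_name": "R Language Server",
    }
}
```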

Design ideas

One approach would be to create a Jupyter request that allows clients to retrieve a language server spec specific to the running kernel. When the kernel gets that request, it would respond with a dynamically built language server spec that conforms to the usual spec schema.

The Jupyter protocol states that unknown requests can be ignored. That makes it a bit awkward to send the request unconditionally, as the client would need a timeout to determine that the kernel does not support it. Instead we could advertise the capability through extended fields in the kernel-info response?
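
As a rough sketch of what that could look like on the wire (every message type and extension field below is hypothetical, not part of the current protocol):

```python
# Hypothetical wire format, shown as Python dicts.

# 1. The kernel advertises support via an extension field in its
#    kernel_info reply (field name is made up for illustration):
kernel_info_reply = {
    "status": "ok",
    "implementation": "ark",
    # ... standard kernel_info fields elided ...
    "supported_features": ["lsp_spec_request"],
}

# 2. A client that sees the feature sends a (hypothetical) lsp_spec_request
#    and gets back a spec built for this particular kernel session:
lsp_spec_reply = {
    "display_name": "Ark R LSP",
    "languages": ["r"],
    "version": 2,
    # connection details specific to this session, e.g. a TCP endpoint
    "tcp": {"host": "127.0.0.1", "port": 12345},
}
```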

We would also need the ability to run one LSP per notebook/kernel, which is not currently supported (there is some discussion about this in #642).

(It would help if a language server spec could be set for a TCP connection. But that's not necessary as we could work around that and create a CLI util to relay between stdin/stdout and a TCP connection to our server.)
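
Such a relay could be a small script along these lines (a minimal sketch using only the Python standard library; the host and port of the kernel's LSP server are passed on the command line), which a spec could then reference via `argv`:

```python
#!/usr/bin/env python3
"""Minimal sketch of a stdio <-> TCP relay for an LSP server."""
import socket
import sys
import threading


def pump(read, write):
    """Copy bytes from `read` to `write` until EOF."""
    while True:
        chunk = read(4096)
        if not chunk:
            break
        write(chunk)


def main():
    host, port = sys.argv[1], int(sys.argv[2])
    sock = socket.create_connection((host, port))
    stdin, stdout = sys.stdin.buffer, sys.stdout.buffer

    def to_socket(data):
        sock.sendall(data)

    def to_stdout(data):
        stdout.write(data)
        stdout.flush()

    # client -> server: forward stdin to the TCP socket in the background
    threading.Thread(target=pump, args=(stdin.read1, to_socket), daemon=True).start()
    # server -> client: forward the TCP socket to stdout until the server hangs up
    pump(sock.recv, to_stdout)


if __name__ == "__main__":
    main()
```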

bollwyvl (Collaborator) commented

At this stage, with jupyter_lsp a hard dependency of JupyterLab, it is a bit tricky to consider drastically changing the API surface on the server. It's already quite expensive to keep the discovery of servers we know are installed on $PATH from being horrible, but it works well enough to get a reasonably stable experience.

Humorously, the REST endpoint is backed by an evented system that could notify clients when new specs appear, but that would require another WebSocket.

Proposed in several places (and implemented in some other, proprietary ones) is putting LSP JSON messages on the Jupyter kernel messaging protocol, either as a comm (my preference, as comms already have an announcement mechanism, but this has been violently disagreed with) or somehow on the Control channel, inventing new messages.

In this scenario, the jupyter_server wouldn't even know LSP things were happening, per se, and might not even need jupyter-lsp.
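
To make the comm variant concrete, a kernel-side sketch for a Python (IPython) kernel might look like this; the "lsp" target name and the handle_lsp_request() dispatcher are hypothetical, and LSP JSON-RPC payloads are assumed to travel verbatim as comm data:

```python
# Kernel-side sketch (IPython kernel). Everything named here is illustrative,
# not an agreed-upon protocol.

def handle_lsp_request(request):
    """Hypothetical dispatcher into an in-process language server."""
    raise NotImplementedError


def register_lsp_comm_target(shell):
    def on_lsp_comm_open(comm, open_msg):
        # Each client connection maps onto one comm.
        def on_msg(msg):
            request = msg["content"]["data"]   # a JSON-RPC request or notification
            response = handle_lsp_request(request)
            if response is not None:           # notifications get no response
                comm.send(response)

        comm.on_msg(on_msg)

    shell.kernel.comm_manager.register_target("lsp", on_lsp_comm_open)


# e.g. from an IPython startup file running inside the kernel:
# register_lsp_comm_target(get_ipython())
```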

On the client, the language server connection manager would be able to register a provider of specs, with the ability to notify when new ones are available, and a priority/rank system would make these "more attractive" choices... though multiple servers per document (per #437) would really need to be considered in this case as well.

#278 (now terribly bitrotted) suggested a single kernel with a comm; having each kernel be able to provide this would allow for multi-machine use cases (e.g. kernel-gateway), or even running entirely within the browser, à la JupyterLite.

This would represent a fair amount of work for kernel implementers, of course, but it is likely the only way to support some of the use cases (e.g. in-memory WASM file systems).

lionel- (Author) commented Sep 26, 2024

Interesting stuff!

> Proposed in several places (and implemented in some other, proprietary ones) is putting LSP JSON messages on the Jupyter kernel messaging protocol, either as a comm (my preference, as comms already have an announcement mechanism, but this has been violently disagreed with) or somehow on the Control channel, inventing new messages.

Since incoming comm messages are queued on the Shell channel, this would prevent the kernel from receiving LSP requests while it is busy (e.g. fitting a model). Control does seem like a better place for async messaging; we're thinking of using that socket for some of our own comm messages for the same reason.
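
For illustration, sending such a message from a client over Control with jupyter_client would look roughly like this; the "lsp_request" message type is hypothetical, so a stock kernel will simply ignore it and the sketch only shows the wire mechanics:

```python
# Client-side sketch using jupyter_client. A kernel that handled a
# (hypothetical) "lsp_request" on Control could answer even while Shell
# is busy executing user code.
from queue import Empty

from jupyter_client.manager import start_new_kernel

km, kc = start_new_kernel(kernel_name="python3")
try:
    msg = kc.session.msg(
        "lsp_request",  # hypothetical message type
        content={"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}},
    )
    kc.control_channel.send(msg)
    try:
        reply = kc.control_channel.get_msg(timeout=5)
        print(reply["msg_type"], reply["content"])
    except Empty:
        print("no reply: this kernel does not implement lsp_request")
finally:
    kc.stop_channels()
    km.shutdown_kernel()
```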
