Move LSP's tokio::Runtime and Client to RMain #193
Conversation
@@ -496,35 +497,51 @@ impl Backend {
    }
}

#[tokio::main]
No more attribute because we manage this manually now, by creating the Runtime elsewhere and calling block_on() ourselves.
Cool, this attribute seems a bit too magic/mysterious when you're not familiar with tokio
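For readers who haven't seen it spelled out, the attribute is roughly shorthand for building a runtime by hand and driving the top-level future with block_on(). A minimal, self-contained sketch (plain tokio, not ark code):

use tokio::runtime::Runtime;

fn main() {
    // Roughly what `#[tokio::main]` generates: construct a multi-threaded
    // runtime explicitly, then drive the top-level future to completion.
    let runtime = Runtime::new().unwrap();
    runtime.block_on(async {
        // The body that would otherwise live in `async fn main()`.
        println!("running on a hand-built tokio runtime");
    });
}

Building the runtime by hand is what lets it be created elsewhere and handed around, as this PR does.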
#[cfg(feature = "runtime-agnostic")]
let (read, write) = (read.compat(), write.compat_write());

pub fn build_lsp_service() -> (LspService<Backend>, ClientSocket, Client) {
This encapsulates the "trick" for getting access to the Client in a way that can be sent along to start_r().
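To make the trick concrete, here is a sketch of how such a function can work with tower-lsp, assuming Backend::new(client) is the constructor (ark's real constructor may differ): Client is cheaply cloneable, so the builder closure can stash a copy aside and return it to the caller alongside the service and socket.

use tower_lsp::{Client, ClientSocket, LspService};

pub fn build_lsp_service() -> (LspService<Backend>, ClientSocket, Client) {
    // Slot used to smuggle a clone of the `Client` out of the builder closure.
    let mut client_for_r = None;

    let (service, socket) = LspService::build(|client| {
        client_for_r = Some(client.clone());
        // `Backend::new()` is assumed here; the real constructor may take
        // more arguments.
        Backend::new(client)
    })
    .finish();

    (service, socket, client_for_r.expect("builder invokes the closure"))
}

The returned Client can then travel to start_r(), while the service and socket go to the LSP thread.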
@@ -85,7 +85,7 @@ impl<'a> DiagnosticContext<'a> {
    }
}

-pub async fn enqueue_diagnostics(backend: Backend, uri: Url, version: i32) {
+pub fn enqueue_diagnostics(backend: Backend, uri: Url, version: i32) {
Happened to realize along the way that this doesn't need to be async
I was also thinking we could remove the unnecessary block_on() calls at some point, since they complicate call stacks with tokio stuff.
-let rt = Runtime::new().unwrap();
-let globals = R_CALLBACK_GLOBALS.as_ref().unwrap();
+let main = RMain::get();
+let runtime = main.get_lsp_runtime();
Now reusing existing runtime!
if let Err(err) = result {
    // In the unlikely event that the LSP `client` hasn't been
    // initialized yet, or has shut down, we probably don't want
    // to crash ark.
    log::error!("Failed to open '{uri}' due to {err:?}");
}
This seems unlikely, but better than an unwrap() IMO.
// Transfer field ownership to the thread
let runtime = self.runtime.take().unwrap();
let service = self.service.take().unwrap();
let socket = self.socket.take().unwrap();

spawn!("ark-lsp", move || {
-    backend::start_lsp(tcp_address, conn_init_tx)
+    backend::start_lsp(runtime, service, socket, tcp_address, conn_init_tx)
I like this Option + take() trick for one-time ownership transfer to threads. Useful for things that can't be cloned.
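The pattern is generic Rust rather than anything ark-specific; a minimal, runnable illustration with a stand-in type:

use std::thread;

// Stand-in for something that can be neither cloned nor copied,
// such as a tokio `Runtime` or an `LspService`.
struct NotCloneable(String);

struct Owner {
    // Wrapping the field in `Option` lets us move it out exactly once
    // with `take()`, even though we only hold `&mut self`.
    resource: Option<NotCloneable>,
}

impl Owner {
    fn start(&mut self) {
        // Transfer field ownership to the thread; a second call would
        // panic on the `unwrap()`, making accidental reuse loud.
        let resource = self.resource.take().unwrap();

        thread::spawn(move || {
            println!("thread now owns: {}", resource.0);
        })
        .join()
        .unwrap();
    }
}

fn main() {
    let mut owner = Owner {
        resource: Some(NotCloneable(String::from("runtime, service, socket"))),
    };
    owner.start();
}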
// LSP tokio runtime used to spawn LSP tasks on the executor and the
// corresponding client used to send LSP requests to the frontend.
// Used by R callbacks, like `ps_editor()` for `utils::file.edit()`.
lsp_runtime: Arc<Runtime>,
Should this be called tokio_runtime? I assume we'll share it with entities other than the LSP if tokio becomes needed in other parts of ark?
Happy to rename this in the future, but right now I'm not 100% sure if we will use the same tokio runtime everywhere or not. For example, in help_proxy.rs we have another separate runtime already:
// The help proxy main entry point.
#[tokio::main]
async fn task(target_port: u16) -> anyhow::Result<()> {
    // Create the help proxy.
    let help_proxy = HelpProxy::new(target_port)?;
    // Set the help proxy port.
    unsafe { browser::PORT = help_proxy.source_port };
    // Run the help proxy.
    Ok(help_proxy.run().await?)
}
Joint work with @lionel-
Addresses posit-dev/positron#1885
Addresses posit-dev/positron#1956
We now use one tokio::Runtime for LSP tasks. The tasks can come either from the main LSP thread or from an R callback, like file.edit() (which calls the LSP show_document() method).

We now call spawn() rather than block_on() in ps_editor(). This allows R to return immediately from ps_editor(), preventing the deadlock described in "R exits unexpectedly when calling file.edit() many times" posit-dev/positron#1885 (comment). We aren't worried about the fact that file.edit() could technically return before the file has been opened. RStudio similarly just sends off an event to open the file and then immediately returns, and it hasn't proven to be an issue there. We also weren't waiting on the file to be open before; the future we were blocking on only confirmed that the show-document request was actually sent, not that it had been received and acted upon.

I've finally been able to remove all reliance on the ugly global variables we had floating around.
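In case it helps to picture the spawn()-versus-block_on() change, here is a self-contained sketch of the fire-and-forget hand-off (not ark's actual ps_editor(); pretend_show_document() is a hypothetical stand-in for the LSP request):

use std::sync::Arc;
use tokio::runtime::Runtime;

// `spawn()` queues the work on the shared runtime and returns immediately,
// so the calling (R) thread never blocks waiting for the frontend.
fn open_file_without_blocking(runtime: &Arc<Runtime>, path: String) {
    runtime.spawn(async move {
        // In ark this is where the LSP `Client` would send `show_document()`.
        if let Err(err) = pretend_show_document(path).await {
            eprintln!("Failed to open document: {err:?}");
        }
    });
}

// Hypothetical stand-in for the real request.
async fn pretend_show_document(path: String) -> Result<(), std::io::Error> {
    println!("would ask the frontend to open {path}");
    Ok(())
}

fn main() {
    let runtime = Arc::new(Runtime::new().unwrap());
    open_file_without_blocking(&runtime, String::from("analysis.R"));
    // Give the spawned task a moment to run before the runtime is dropped.
    std::thread::sleep(std::time::Duration::from_millis(50));
}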
The Runtime is created in ark's start_kernel() and wrapped in an Arc so it can be sent to the LSP thread (the runtime can't be cloned). It is also sent to start_r() to be a part of RMain. Runtime methods are all immutable, so it should not require a Mutex.
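A self-contained illustration of why the Arc wrapper is all that's needed (plain tokio, not ark code): the runtime itself can't be cloned, but the Arc can be, and both block_on() and spawn() take &self, so no Mutex is required.

use std::sync::Arc;
use std::thread;
use tokio::runtime::Runtime;

fn main() {
    let runtime = Arc::new(Runtime::new().unwrap());

    let worker = {
        // Clone the `Arc`, not the runtime, and move it to another thread.
        let runtime = Arc::clone(&runtime);
        thread::spawn(move || {
            // The other thread reuses the same runtime.
            runtime.block_on(async { 21 * 2 })
        })
    };

    // The original handle is still usable here, with no locking.
    let local = runtime.block_on(async { 1 + 1 });

    println!("{} {}", local, worker.join().unwrap());
}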
The Client was initially trickier, but I figured out that you can build the LspService at any time, and that is what gives us access to the Client, so I extracted that build call out into build_lsp_service() and we call that from start_kernel() now too. The returned service and socket are passed along to the LSP handler to actually start the LSP, but the client is instead routed to start_r() to be a part of RMain. One thing to note is that we need to be somewhat careful that the LSP actually gets initialized before sending requests through the Client, i.e. the LSP initialize() call needs to get called first. I think this is unlikely to be problematic in practice.

Proof of some things working on the Windows VM that didn't work before:
Screen.Recording.2024-01-03.at.3.46.09.PM.mov
Screen.Recording.2024-01-03.at.3.46.49.PM.mov