Redesign core architecture, support custom server requests #313
Ahh, looks like 7920a99 cannot be implemented right now. EDIT: Actually, I realize now that I should be able to retrieve the source and then downcast that instead.
Okay, I'll see if I can implement a change for that, but your solution seems reasonable for now.
These changes sound great! I only skimmed through the source diffs so far, but I think all the proposed changes make sense given previous discussions around the various issues.
```rust
impl LanguageServer for Foo { ... }

impl Foo {
    async fn request(&self) -> Result<...> { ... }
    async fn request_params(&self, params: ...) -> Result<...> { ... }
    async fn notification(&self) { ... }
    async fn notification_params(&self, params: ...) { ... }
}

let (service, socket) = LspService::build(|client| Foo { ... }) // Wow! :heart_eyes:
    .method("custom/request", Foo::request)
    .method("custom/requestParams", Foo::request_params)
    .method("custom/notification", Foo::notification)
    .method("custom/notificationParams", Foo::notification_params)
    .finish();
```

In the demo above, what is the type of `params`?
Please ignore me, I didn't see the docs.
I tried this branch out on the code I was working on that I mentioned back in #284 (comment). As far as I can tell, this seems like it would have fixed the issue I had. So that's cool!
This new data model is significantly more generic than the old one, with straightforward `Request` and `Response` types capable of representing any valid JSON-RPC 2.0 message, rather than artificially restricting the scope to LSP request types only. This opens the door to supporting user-defined custom client-to-server requests. These slimmer data types should also consume less memory per message.
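To make the shape of such a data model concrete, here is a minimal, hypothetical sketch of a `Request` type that can represent both calls and notifications under one roof. The field and type names here are illustrative only, not the crate's actual definitions (which would use `serde_json::Value` for `params`, among other things):

```rust
// Hypothetical sketch: one `Request` type covers both JSON-RPC 2.0 calls
// (which carry an id) and notifications (which do not).
#[derive(Debug, PartialEq)]
enum Id {
    Number(i64),
    String(String),
}

#[derive(Debug, PartialEq)]
struct Request {
    method: String,
    params: Option<String>, // in practice, a structured JSON value
    id: Option<Id>,         // `None` marks a notification
}

impl Request {
    fn is_notification(&self) -> bool {
        self.id.is_none()
    }
}

fn main() {
    let call = Request {
        method: "initialize".into(),
        params: Some("{}".into()),
        id: Some(Id::Number(1)),
    };
    let note = Request {
        method: "$/cancelRequest".into(),
        params: Some("{\"id\":1}".into()),
        id: None,
    };
    assert!(!call.is_notification());
    assert!(note.is_notification());
    println!("ok");
}
```

Collapsing calls and notifications into one type is what lets a single router dispatch any valid JSON-RPC 2.0 message rather than only known LSP methods.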
It seems that even with this rewrite of `Server`, there are still concerns around stalls occurring when `max_concurrency` long-running server tasks are spawned, causing backpressure. This stops the `read_input` task from advancing, thereby preventing any `$/cancelRequest` messages sent by the client from being received. Unlike the code present in the `master` branch today, this situation heals itself over time as the buffered futures eventually resolve and return responses, freeing `read_input` to continue advancing. However, if any of these futures happen to be waiting to receive inbound responses from the client, those futures will block forever and cause the transport to stall once again.

A reproducible example of this issue can be created by adding this line to the `execute_command()` handler in `examples/stdio.rs`, just before the `self.client.log_message` call:

```rust
tokio::time::sleep(std::time::Duration::from_secs(10)).await;
```

Then, in your LSP client of choice, execute the `dummy.do_something` command on the server over and over again, as quickly as you can. The server should slowly grow unresponsive and eventually stall completely as the client reports that the server has timed out, usually after 5000 ms for most language clients.

It would be nice if we could set the transport concurrency level to something very, very high (like 100+, akin to the `SETTINGS_MAX_CONCURRENT_STREAMS` setting in HTTP/2) and then have a separate concurrency level for processing server requests specifically (something customizable by the user and defaulting to a much lower value, like 4). I believe an approach like this would fix the issue for the vast majority of users. This can be achieved by raising the buffer size of the `server_tasks_{tx,rx}` channel from 0 to 100, thereby allowing up to 100 server requests to be queued for processing at a time, but only executed in batches of `max_concurrency`.
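The "deep queue, small worker pool" idea above can be sketched with a std-only analogue: a bounded channel absorbs inbound requests so the reader rarely stalls, while a separate, much smaller worker pool bounds actual execution concurrency. All names and numbers here are illustrative, not tower-lsp's real internals (which are async, not thread-based):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

const QUEUE_DEPTH: usize = 100; // akin to SETTINGS_MAX_CONCURRENT_STREAMS
const WORKERS: usize = 4;       // akin to the proposed `max_concurrency`

fn process(task: u32) -> u32 {
    task * 2 // stand-in for a long-running request handler
}

fn main() {
    // Bounded channel: the "read_input" side only blocks once QUEUE_DEPTH
    // tasks are already pending, instead of after only WORKERS tasks.
    let (task_tx, task_rx) = mpsc::sync_channel::<u32>(QUEUE_DEPTH);
    let task_rx = Arc::new(Mutex::new(task_rx));
    let (done_tx, done_rx) = mpsc::channel::<u32>();

    // Fixed-size worker pool bounds how many requests execute at once.
    let mut handles = Vec::new();
    for _ in 0..WORKERS {
        let rx = Arc::clone(&task_rx);
        let tx = done_tx.clone();
        handles.push(thread::spawn(move || loop {
            // Take the next queued task; exit when the queue is closed.
            let task = match rx.lock().unwrap().recv() {
                Ok(t) => t,
                Err(_) => break,
            };
            tx.send(process(task)).unwrap();
        }));
    }
    drop(done_tx);

    // The reader can enqueue many requests without stalling.
    for i in 0..10 {
        task_tx.send(i).unwrap();
    }
    drop(task_tx);

    let total: u32 = done_rx.iter().sum();
    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(total, 90); // 2 * (0 + 1 + ... + 9)
    println!("{total}");
}
```

The key property is that the queue depth (how much the reader can buffer before blocking) is decoupled from the worker count (how much runs at once), which is exactly what raising the channel buffer from 0 to 100 achieves while keeping `max_concurrency` small.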
Okay, I think this pull request is largely ready to merge! With some light integration testing out of the way and some external feedback addressed, it seems that the time has come to pull the trigger on this. It may take a bit to prepare the changelog entries for the next release, given all the changes, and it's very early in the morning where I am, so pushing the green button will have to wait another day or two. 😴
The vast majority of `Method` implementers implement `Copy + Clone`, except for `$/cancelRequest`, which implements `Clone` only (due to needing to clone `Arc<Pending>` on every call). As such, it should be possible to place this bound on `handler: F` and avoid multiple ownership altogether.
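A minimal sketch of that idea, with hypothetical names (`Route`, `dispatch`) rather than this codebase's actual types: with an `F: Copy` bound, the router can hand out bitwise copies of the handler on each call instead of sharing one through `Arc`:

```rust
// Hypothetical illustration: with `F: Copy`, every call site can take its
// own copy of the handler, so no `Arc` (multiple ownership) is needed.
#[derive(Clone, Copy)]
struct Route<F> {
    handler: F,
}

impl<F> Route<F>
where
    F: Fn(i32) -> i32 + Copy,
{
    fn dispatch(&self, input: i32) -> i32 {
        // Copying `F` is free for function items and non-capturing
        // closures, which most handlers are.
        let handler = self.handler;
        handler(input)
    }
}

fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    let route = Route { handler: double };
    // The route itself is `Copy`, so it can be handed to multiple callers.
    let another = route;
    assert_eq!(route.dispatch(3), 6);
    assert_eq!(another.dispatch(4), 8);
    println!("ok");
}
```

A handler that captures an `Arc<Pending>` (as `$/cancelRequest` does) would be `Clone`-only and would need a separate, cloning code path.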
In this case, we take "callback" to refer to some function or method passed as an argument to be executed by the `Router<S>`, while "handler" refers to the object wrapping said callback that gets executed on every `<Router<S> as Service<Request>>::call`.
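To make that terminology concrete, here is a toy sketch (all names hypothetical, not the crate's real types):

```rust
// Toy illustration of the terminology: the *callback* is the user-supplied
// function; the *handler* is the wrapper object the router executes on
// every request, analogous to `<Router<S> as Service<Request>>::call`.
struct Handler<F> {
    callback: F,
}

impl<F: Fn(&str) -> String> Handler<F> {
    fn call(&self, request: &str) -> String {
        // The handler's per-request job: invoke the wrapped callback.
        (self.callback)(request)
    }
}

fn main() {
    let handler = Handler {
        callback: |req: &str| format!("handled {req}"),
    };
    assert_eq!(handler.call("custom/request"), "handled custom/request");
    println!("ok");
}
```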
I've chosen to rename it.
Looks like this is ready to merge. Let's go! 🎉
Could you please publish a new version? Currently I'm using the git repo as my dependency, which may break when you add breaking changes.
Yup, I'm waiting for PR #321 to finish in CI as I type this. 😄
**Added**

- `LspService::build().method(...).finish()`.
- `Client` now implements `Service<Request, Response = Option<Response>>`, just like `LspService`.
- `concurrency_level` setting on `Server`.
- `jsonrpc::Request` object with convenient builder interface.
- `.result()`/`.error()` and `.is_ok()`/`.is_error()` methods to `jsonrpc::Response`.
- `From` implementations for `jsonrpc::Id`.
- `Loopback` trait to allow for mock service/client to be used with `Server`.

**Changed**

- `LspService` now implements `Service<Request, Response = Option<Response>>`.
- `LspService::new()` now returns a `ClientSocket` instead of a `MessageStream`.
- `Server::new()` now requires a third `ClientSocket` argument instead of using `.interleave()`.
- Renamed `jsonrpc::Response::{ok, error}` to `jsonrpc::Response::{from_ok, from_error}`.
- Renamed `Client::send_custom_{request,notification}` to `Client::send_{request,notification}`.

**Fixed**

- `Server` occasionally stalling by processing client responses separately from client-to-server requests.
- Respond with error code `-32600` (invalid request) if incoming JSON parses, but isn't a JSON-RPC request or response.

**Removed**

- `Server::interleave()` in favor of a third `socket` argument to `Server::new()`.
- `jsonrpc::{ClientRequest, Incoming, Outgoing, ServerRequest}`.
- `MessageStream`.

This redesign has been a very long time coming, but it's finally here! 🎉 This pull request shouldn't affect too much code downstream, except the sacred initialization incantation is slightly different:
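Judging from the changelog entries (`LspService::new()` now returns a `ClientSocket`, which `Server::new()` takes as a third argument in place of `.interleave()`), the new incantation presumably looks something like the following sketch, where `Backend`, `stdin`, and `stdout` are placeholders for your own server type and transport:

```rust
let (service, socket) = LspService::new(|client| Backend { client });
Server::new(stdin, stdout, socket).serve(service).await;
```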
Most importantly, support has been added for custom server requests via the following API:
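This is the API quoted earlier in the thread, reproduced here for completeness; the `...` elisions for bodies and parameter types are left as in the original sketch, so it is not compilable as-is:

```rust
impl LanguageServer for Foo { ... }

impl Foo {
    async fn request(&self) -> Result<...> { ... }
    async fn request_params(&self, params: ...) -> Result<...> { ... }
    async fn notification(&self) { ... }
    async fn notification_params(&self, params: ...) { ... }
}

let (service, socket) = LspService::build(|client| Foo { ... })
    .method("custom/request", Foo::request)
    .method("custom/requestParams", Foo::request_params)
    .method("custom/notification", Foo::notification)
    .method("custom/notificationParams", Foo::notification_params)
    .finish();
```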
In the process, this pull request also lands a slew of architectural improvements (under the hood and otherwise) and fixes a long-standing critical concurrency bug.
`Client` now finally implements `tower::Service`, making it possible to fire off requests and notifications at the client "the Tower way". The `jsonrpc` module has seen much consolidation too, with a new and improved `Request` type replacing `Message`, `ClientRequest`, and `ServerRequest`. Finally, as part of the ground-up rewrite of `Server`, the server's concurrency level can now be customized by the user, if desired.

I didn't originally plan on all of these improvements landing in a single PR, but they nonetheless arose as a consequence of rewriting the internal JSON-RPC router to support custom requests.
Closes #226.
Closes #256.
Closes #177.
Partially addresses #284.
I'd love any feedback on this before merging, @silvanshade @Timmmm @IWANABETHATGUY @samscott89 @lgalabru! ❤️
Sorry for the noisy pings, but I'd appreciate any willing real-world testers.