Remodel service as bidirectional stream #177

Closed
ebkalderon opened this issue Apr 10, 2020 · 4 comments · Fixed by #313
Labels: enhancement (New feature or request)

Comments

@ebkalderon (Owner)

Problem statement

Currently, LspService adopts a unary request/response model where <LspService as tower::Service>::call() accepts a single Incoming message and returns an Option<String> response.

The basic service definition looks like this:

impl tower::Service<Incoming> for LspService {
    type Response = Option<String>;
    type Error = ExitedError;
    type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn call(&mut self, message: Incoming) -> Self::Future { ... }
}

This model is a poor fit for a Language Server Protocol implementation for several reasons:

  1. LSP is bidirectional: the client and server can both send requests and responses to each other. The unary request/response model imposes a strong client-to-server ordering on the LspService when there really shouldn't be one, leading to hacks such as MessageStream and wrapping each outgoing message in an Option to get the behavior we want.
  2. Client-to-server communication cannot be polled independently of server-to-client communication, since both are funneled through the LspService::call() method. This method must be called repeatedly to drive two independent streams of communication, which is awkward to use and to write tests for.

Proposal

For these reasons, I believe it would be better to adopt a bidirectional model structured like this:

impl tower::Service<MessageStream> for LspService {
    type Response = MessageStream;
    type Error = ExitedError;
    type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn call(&mut self, stream: MessageStream) -> Self::Future { ... }
}

With the above model, the call() method is only ever invoked once by the client, given a stream of incoming messages. It produces a future which resolves either to Ok(MessageStream) if all is well, or to Err(ExitedError) if the language server has already shut down. Both message streams can then be polled independently of each other, in either direction, with no strict ordering. When the language server shuts down, the outgoing message stream terminates, and any subsequent attempt to call LspService::call() to produce a new one will immediately return Err(ExitedError).
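
For illustration, a consumer of the proposed API might drive the service roughly as in the sketch below. Nothing here exists yet: the MessageStream alias is a hypothetical stand-in for whatever stream type we settle on (distinct from the crate's current MessageStream), and readiness polling is elided for brevity.

use std::pin::Pin;

use futures::{Stream, StreamExt};
use tower::Service;

// Hypothetical stand-in for the proposed request/response stream type.
type MessageStream = Pin<Box<dyn Stream<Item = String> + Send>>;

async fn drive<S>(mut service: S, incoming: MessageStream) -> Result<(), S::Error>
where
    S: Service<MessageStream, Response = MessageStream>,
{
    // `call()` is invoked exactly once, handing over the entire stream of
    // incoming client-to-server messages (poll_ready is skipped for brevity).
    let mut outgoing = service.call(incoming).await?;

    // Server-to-client messages can now be polled independently of the
    // incoming stream passed in above; there is no strict ordering.
    while let Some(message) = outgoing.next().await {
        println!("server -> client: {}", message);
    }

    // The outgoing stream terminating means the server has shut down; any
    // subsequent `call()` would return `Err(ExitedError)`.
    Ok(())
}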

User impact

This change would have a minor impact on users who rely on tower_lsp::Server to route communication over stdio. It would result in a slightly different initialization process:

Before

let (service, messages) = LspService::new(Backend::default());
Server::new(stdin, stdout)
    .interleave(messages)
    .serve(service)
    .await;

After

let service = LspService::new(Backend::default());
Server::new(stdin, stdout)
    .serve(service)
    .await;

This change would have a greater effect on users wrapping LspService in tower middleware in their own projects, since the service's request and response types would change and would need to be handled differently.
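
For context, the kind of wrapping this refers to is layering generic tower middleware over the unary service, roughly as in this hypothetical sketch (the timeout layer, tower's "timeout" feature, and the 30-second value are arbitrary illustrative choices, not part of this proposal):

use std::time::Duration;

use tower::{BoxError, Service, ServiceBuilder};

// Wrap an arbitrary unary service in tower's timeout middleware. Layers like
// this are written against the service's concrete request/response types,
// which is why changing those types is a breaking change for downstream
// middleware stacks.
fn with_timeout<S, Req>(inner: S) -> impl Service<Req, Response = S::Response, Error = BoxError>
where
    S: Service<Req>,
    S::Error: Into<BoxError>,
{
    ServiceBuilder::new()
        .timeout(Duration::from_secs(30))
        .service(inner)
}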

@ebkalderon added the enhancement label on Apr 10, 2020
@ebkalderon (Owner, Author)

There may need to be some investigation into whether most available tower middleware will still be useful after this change, though. Much of it seems to operate on unary requests and responses, meaning it would no longer apply here once the switch to bidirectional request/response streaming is made (apart from perhaps retry and timeout).

@LucioFranco

@ebkalderon this seems reasonable! This is basically what HTTP-based tower services do, where we have a Body type that is essentially a Stream. Usually, the impact middleware has is on making the connection. We can't reliably retry, say, a streaming request, because we don't know at which point we drained the stream. But we can retry things like a failed attempt to make the initial request, or a whole request that failed with a retryable status.

ebkalderon added a commit that referenced this issue Sep 24, 2020
This abstraction is a little bit cleaner since it hides the concept of
invalid requests from the user. Additionally, it simplifies the logic in
the `Service::call` trait method implementation on `LspService`.

This commit is an early step towards refactoring the service model:

#177
@ebkalderon (Owner, Author)

I wonder whether this ticket should be closed due to #313? I ended up not taking this approach with my refactoring, since it would render the vast majority of tower middleware useless (most assume a unary request/response model). Instead, I chose the following approach:

  1. LspService now implements Service<jsonrpc::Request, Response = Option<jsonrpc::Response>>. It only handles client-to-server requests and notifications.
  2. Client also implements Service<jsonrpc::Request, Response = Option<jsonrpc::Response>>. It only handles server-to-client requests and notifications.
  3. When constructing an LspService, it returns a (Self, ClientSocket). The ClientSocket is a kind of loopback channel which acts as the opposite half of Client, emitting a stream of requests and accepting responses. The only purpose of ClientSocket is to be blindly passed into Server::new(stdin, stdout, client_socket). It does not implement the Service trait for obvious reasons, but there's no need for it to, since Client now implements Service.

In short, LspService is a unary request/response Service that represents the LSP server, and Client is a unary request/response Service that represents the LSP client. Both behave pretty much as you'd expect under the tower model. 👍
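
For reference, a minimal setup under the post-#313 model looks roughly like the sketch below, mirroring the crate's own examples; the Backend type here is a trivial placeholder.

use tower_lsp::jsonrpc::Result;
use tower_lsp::lsp_types::{InitializeParams, InitializeResult};
use tower_lsp::{Client, LanguageServer, LspService, Server};

// Trivial placeholder backend; a real server would implement more of the
// `LanguageServer` trait.
struct Backend {
    // Stored so the server can send requests and notifications back to the client.
    client: Client,
}

#[tower_lsp::async_trait]
impl LanguageServer for Backend {
    async fn initialize(&self, _: InitializeParams) -> Result<InitializeResult> {
        Ok(InitializeResult::default())
    }

    async fn shutdown(&self) -> Result<()> {
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let (stdin, stdout) = (tokio::io::stdin(), tokio::io::stdout());

    // `LspService::new` returns the service plus a `ClientSocket`, the
    // loopback half of `Client` carrying server-to-client messages.
    let (service, socket) = LspService::new(|client| Backend { client });

    // The socket is handed straight to `Server`, which routes both
    // directions of communication over stdio.
    Server::new(stdin, stdout, socket).serve(service).await;
}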

Of course, there's no support for request batching under this data model because the current version of the Language Server Protocol does not support it (microsoft/language-server-protocol#988). This is despite LSP being based, at least partially, on JSON-RPC 2.0, which does have request batching built into the specification.

@ebkalderon (Owner, Author)

After giving it some more thought, I feel that the new design introduced by #313 should sufficiently resolve most of the concerns outlined in this ticket while keeping the unary model. Further improvements could potentially be made on the Response side, e.g. finding another way to model an empty Response without resorting to Option<_>, or perhaps improving the ClientSocket abstraction. If concerns around the service model ever crop up again, we can always reopen this ticket.
