
implement non-blocking stream server handler #503

Open · naoh87 wants to merge 4 commits into main from light_server_stream_runtime
Conversation

naoh87 (Contributor) commented Feb 13, 2022

What this PR does

  • Eliminates blocking from the stream server runtime.
  • Aligns the unary server runtime I/F.
  • Fixes the flaky ServerSuite.

ClientStreaming Benchmark

Handler implementation:

request.compile.last.map(_.fold(HelloReply())(r => HelloReply(r.request)))

Requested 100 messages per call with ghz --cpu=4 -z 60s --connections=5 ...

#503 (this PR)

actual cpu used: 1.0core
Summary:
  Count:	39264
  Total:	60.00 s
  Slowest:	385.42 ms
  Fastest:	2.80 ms
  Average:	74.86 ms
  Requests/sec:	654.39

main

actual cpu used: 1.0core
Summary:
  Count:	11280
  Total:	60.00 s
  Slowest:	1.09 s
  Fastest:	7.85 ms
  Average:	265.53 ms
  Requests/sec:	188.00

naoh87 force-pushed the light_server_stream_runtime branch 7 times, most recently from f0e2687 to 2be5402 on February 17, 2022
naoh87 force-pushed the light_server_stream_runtime branch from 2be5402 to e5e1b57 on February 21, 2022
naoh87 mentioned this pull request on Feb 27, 2022
ahjohannessen (Collaborator) commented:

@naoh87 I have been busy with other stuff and have not forgotten about this PR. I'll look more closely into it when I find the time.

naoh87 force-pushed the light_server_stream_runtime branch from e5e1b57 to 876b4ab on March 15, 2022
naoh87 force-pushed the light_server_stream_runtime branch from 876b4ab to 32595a3 on March 15, 2022
ahjohannessen (Collaborator) commented:

@naoh87 I have not forgotten this; however, upcoming work on queues in cats-effect 3.4.x might give us better building blocks.

One question: does your work here take back pressure into account? For instance, when a server can't keep up with a client that streams too much.

@@ -73,6 +73,12 @@ private[server] final class Fs2ServerCall[Request, Response](
    dispatcher
  )

  def requestOnPull[F[_]]: Pipe[F, Request, Request] =
    _.mapChunks { chunk =>
      call.request(chunk.size)
      chunk
    }

I suppose call.request might be impure. Wouldn't it be better to delay the effect, like this:

  import cats.syntax.functor._

  def requestOnPull[F[_]](implicit F: Sync[F]): Pipe[F, Request, Request] =
    _.chunks.flatMap(chunk =>
      Stream.evalUnChunk(F.delay(call.request(chunk.size)).as(chunk))
    )
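To see what the suggestion buys, here is a dependency-free plain-Scala sketch (no fs2/cats-effect; `request` and `DelayDemo` are illustrative names, not the fs2-grpc API) of the difference between running a side effect while the pipeline is merely being described and suspending it in a thunk, as `F.delay` does, so it runs only when the pipeline is evaluated:

```scala
// Plain-Scala sketch of eager vs. suspended side effects; `request`
// stands in for the impure call.request from the diff above.
object DelayDemo {
  val log = scala.collection.mutable.ListBuffer.empty[String]

  def request(n: Int): Unit = log += s"request($n)"

  // Suspended: each side effect is captured in a thunk (mirroring F.delay)
  // and only runs when the thunk is forced, i.e. when the stream is pulled.
  def suspended(chunks: List[List[Int]]): List[() => List[Int]] =
    chunks.map { c => () => { request(c.size); c } }

  def main(args: Array[String]): Unit = {
    val thunks = suspended(List(List(1, 2), List(3)))
    assert(log.isEmpty) // describing the pipeline requested nothing yet
    val out = thunks.map(_.apply()) // "evaluating" the pipeline
    assert(out == List(List(1, 2), List(3)))
    println(log.mkString(",")) // request(2),request(1)
  }
}
```

An unsuspended `call.request(chunk.size)` inside a pure combinator would instead run whenever that code path is reached, outside the effect system's control.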

naoh87 (author) replied:

It looks good, thank you.

call <- Fs2ServerCall.setup(opt, call)
_ <- call.request(1) // prefetch size
channel <- OneShotChannel.empty[Request]
cancel <- start(call, impl(channel.stream.through(call.requestOnPull), headers))
naoh87 (author) commented on May 24, 2022:

@ahjohannessen back pressure is controlled here by call.requestOnPull:
the server requests the next messages from the client each time a stream chunk is pulled.
The prefetch/buffer size is declared two lines earlier, via _ <- call.request(1).
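As a rough single-threaded model of this credit scheme (plain Scala; `Call`, `credits`, and `FlowControlDemo` are illustrative names, not fs2-grpc's types): `request(1)` opens a prefetch window of one message, and each pulled chunk hands the same number of credits back to the client, so outstanding demand never grows beyond the window.

```scala
// Illustrative model of pull-based flow control: the server only grants
// the client as many messages as it has actually pulled, plus the prefetch.
object FlowControlDemo {
  final class Call {
    var credits = 0                // messages the client may still send
    def request(n: Int): Unit = credits += n
  }

  def main(args: Array[String]): Unit = {
    val call = new Call
    call.request(1)                // prefetch, as in `_ <- call.request(1)`
    val pulledChunks = List(List("a"), List("b"), List("c"))
    pulledChunks.foreach { chunk =>
      call.credits -= chunk.size   // messages consumed by this pull
      call.request(chunk.size)     // grant the same amount back to the client
    }
    println(call.credits)          // 1: demand never exceeds the window
  }
}
```

A fast client is therefore throttled to the pace at which the handler pulls chunks, which is the back-pressure property ahjohannessen asked about.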

naoh87 (author) added:

Server-to-client message back pressure is still missing, as it is on the main branch.

naoh87 (author) added:

I think a server-to-client back pressure implementation may cause performance issues.

grpc-java says it is free to ignore this, and the main branch does:
https://github.com/grpc/grpc-java/blob/v1.46.0/api/src/main/java/io/grpc/ServerCall.java#L100

I think this feature should be opt-in.

naoh87 (author) commented May 24, 2022

I think the new cats-effect 3.4.x queue will not work as well as this PR, because the new CE3 queue is implemented as MPMC, but we only need SPSC:
https://github.com/JCTools/JCTools#jctools

Also, the new implementation does not reduce the Dispatcher[F] cost at onMessage.
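For illustration, here is a minimal SPSC-style channel in plain Scala, a sketch in the spirit of OneShotChannel rather than the PR's actual implementation (`OneShot`, `send`, and `drain` are made-up names): a single producer prepends onto an atomic list and the single consumer drains it in one swap, so none of the multi-producer/multi-consumer coordination of a general-purpose queue is needed.

```scala
import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

// Minimal SPSC accumulation channel: one producer, one consumer, one
// atomic reference. Illustrative sketch only, not fs2-grpc code.
object SpscDemo {
  final class OneShot[A] {
    private val state = new AtomicReference[List[A]](Nil)

    // Producer side: prepend via CAS; retry on contention with the consumer.
    @tailrec def send(a: A): Unit = {
      val cur = state.get()
      if (!state.compareAndSet(cur, a :: cur)) send(a)
    }

    // Consumer side: take everything published so far in one swap,
    // oldest message first.
    def drain(): List[A] = state.getAndSet(Nil).reverse
  }

  def main(args: Array[String]): Unit = {
    val ch = new OneShot[Int]
    List(1, 2, 3).foreach(ch.send)
    println(ch.drain().mkString(",")) // 1,2,3
  }
}
```

A general MPMC queue must pay for coordination among arbitrary producers and consumers on every operation, which is the cost the comment above argues an SPSC design avoids.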

naoh87 force-pushed the light_server_stream_runtime branch from 460bdcf to 6b2e2db on May 24, 2022
ahjohannessen (Collaborator) commented May 25, 2022

> I think the new cats-effect 3.4.x queue will not work as well as this PR, because the new CE3 queue is implemented as MPMC, but we just need SPSC.
> https://github.com/JCTools/JCTools#jctools
>
> And also the new implementation does not reduce the Dispatcher[F] cost at onMessage.

Ok. I think we should try to use as much code from cats-effect/fs2 as possible; that reduces the amount of code we have to maintain. I am thinking mostly of OneShotChannel.
