
Stream Batching #524

Closed

kiro opened this issue Feb 3, 2016 · 9 comments
Labels
Type: Feature, Type: Performance

Comments

@kiro

kiro commented Feb 3, 2016

Hi, we noticed a 6x throughput improvement when we stream 100-byte messages in batches of 50 (e.g. one response message with a repeated field) rather than streaming them one by one. Are there plans to add support for batching in the streaming implementation?

Thanks,
Kiril
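
A minimal sketch of this batching pattern, assuming a hypothetical service whose response message carries a repeated field (the `pb` identifiers below are illustrative, not a real generated API):

```go
// Assumed (hypothetical) proto:
//
//   message BatchResponse { repeated bytes payloads = 1; }

const batchSize = 50

// sendBatched packs up to 50 payloads into each stream message instead of
// sending 50 separate stream messages.
func sendBatched(stream pb.Demo_ProduceServer, payloads [][]byte) error {
	for start := 0; start < len(payloads); start += batchSize {
		end := start + batchSize
		if end > len(payloads) {
			end = len(payloads)
		}
		// One wire-level message carries the whole batch.
		if err := stream.Send(&pb.BatchResponse{Payloads: payloads[start:end]}); err != nil {
			return err
		}
	}
	return nil
}
```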

@iamqizhao
Contributor

This will be tricky because batching might increase latency. We already have some minimal deliberate batching support in the code, but how to enhance it to maximize throughput without hurting latency is complex and needs more time. Performance will be one of our main focuses next quarter.

@xiang90
Contributor

xiang90 commented Feb 3, 2016

@iamqizhao Are we open to adding a Flush() method to streams to enable user-controlled batching?
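
For illustration, such a knob might look roughly like this. Flush() is hypothetical here, and so is the idea that Send merely buffers; neither is part of grpc-go's stream API:

```go
// Hypothetical sketch: if server streams exposed a Flush method and Send
// only buffered, a handler could batch explicitly.
func produce(stream pb.Demo_ProduceServer, msgs []*pb.Msg) error {
	for i, m := range msgs {
		if err := stream.Send(m); err != nil { // buffered, not yet on the wire
			return err
		}
		// Flush every 50 messages, or on the last one.
		if (i+1)%50 == 0 || i == len(msgs)-1 {
			stream.Flush() // hypothetical API
		}
	}
	return nil
}
```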

@iamqizhao
Contributor

This would be tricky because of how it interacts with the flush decisions made by the transport layer.

@bradfitz
Contributor

bradfitz commented Feb 5, 2016

@iamqizhao, Go's http2 server does this aggregation automatically. It has a write scheduler that flushes only when a packet is full or there's nothing else to send. So if we switch, I think this should get a lot easier to control (or ideally: not control, if it just works).
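
That flush heuristic can be sketched like this, purely as an illustration (this is not the actual x/net/http2 write scheduler code):

```go
package sketch

import "bufio"

// writeFrames keeps filling the buffered writer while frames are queued;
// it flushes explicitly only when the queue drains (nothing else to send),
// and bufio flushes on its own once the buffer fills.
func writeFrames(queue <-chan []byte, bw *bufio.Writer) {
	for frame := range queue {
		bw.Write(frame)
		if len(queue) == 0 { // queue drained: flush now
			bw.Flush()
		}
	}
	bw.Flush()
}
```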

@iamqizhao
Contributor

@bradfitz, as I mentioned, the current grpc transport does this automatically too (though perhaps differently from what http2 does). The question is whether a Flush knob should be exposed to users.

@sailat

sailat commented Mar 3, 2017

We are working on batching requests this quarter to improve performance. We will update this thread with an availability date soon.

@hsaliak added the enhancement and Type: Performance labels on May 17, 2017
@hsaliak

hsaliak commented May 17, 2017

@dfawley assigning to you to triage and make a decision on exposing the Flush knob to users.

@dfawley
Member

dfawley commented Jun 30, 2017

We discussed this today at some length. Without major API changes or hacks (e.g. passing a magic wrapper type to stream.SendMsg), there isn't much we can do here. It does sound like a good idea, though, so I'll leave it open.

@dfawley removed their assignment on Jun 30, 2017
@dfawley added the Type: Feature label and removed the Type: Enhancement label on Aug 24, 2017
@MakMukhi
Contributor

We changed our write mechanism to use a dedicated write loop, which essentially solves the batching problem.
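
A simplified illustration of that design (not the actual grpc-go transport code): a dedicated writer goroutine drains a send queue, so messages that arrive while a write is in flight are coalesced into a single flush:

```go
package sketch

import "bufio"

// writeLoop is a sketch of a dedicated write loop. Senders enqueue data and
// return immediately; this goroutine drains everything already queued before
// flushing, so bursts of small messages share one flush.
func writeLoop(ch <-chan []byte, out *bufio.Writer) {
	for data := range ch {
		out.Write(data)
	drain:
		for {
			select {
			case more, ok := <-ch:
				if !ok {
					break drain
				}
				out.Write(more)
			default:
				break drain // nothing else pending
			}
		}
		out.Flush()
	}
	out.Flush()
}
```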

The lock bot locked this issue as resolved and limited the conversation to collaborators on Sep 26, 2018