
Support shrinking of receive buffers of a stream #21

Merged
merged 2 commits into master from f-shrink on Nov 29, 2015

Conversation

Member

@armon armon commented Nov 28, 2015

This PR adds the Shrink method, which allows the receive buffers of a Stream to be truncated to zero. This is useful for connection pools that keep idle Streams around, since those Streams may have grown their internal buffers to be fairly large (256KB max by default).

/cc: @slackpad
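
For reference, here is a minimal sketch of what a Shrink method along these lines could look like. It is based on the recvBuf field visible in the diff excerpt below; the recvLock mutex is an assumed field name, and the actual implementation may differ.

// Shrink releases the stream's receive buffer so the memory can be
// garbage collected. Sketch only: recvLock is an assumed field name.
func (s *Stream) Shrink() {
	s.recvLock.Lock()
	if s.recvBuf != nil && s.recvBuf.Len() == 0 {
		// Drop the (possibly large) buffer; it is re-allocated
		// just-in-time when the next data frame arrives.
		s.recvBuf = nil
	}
	s.recvLock.Unlock()
}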

if s.recvBuf == nil {
// Allocate the receive buffer just-in-time to fit the full data frame.
// This way we can read in the whole packet without further allocations.
s.recvBuf = bytes.NewBuffer(make([]byte, 0, length))
Contributor
Interesting that Go has a hard-coded 32k here the way we were doing it before.

@slackpad
Contributor

LGTM

armon added a commit that referenced this pull request Nov 29, 2015
Support shrinking of receive buffers of a stream
@armon armon merged commit df94978 into master Nov 29, 2015
@dadgar dadgar deleted the f-shrink branch November 30, 2015 19:35
@jbenet

jbenet commented Dec 1, 2015

@armon wonder if pool size + time based auto-shrink would be a good idea

if len(streams) > k && time.Since(s.lastRecvd) > timeout {
	s.Shrink()
}

@armon
Member Author

armon commented Dec 2, 2015

@jbenet it's hard to do automatically at the yamux layer, as we have no context from the application. We've used this in the Consul and Nomad connection pools, however, where we do have more context!
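
For illustration, a sketch of the kind of pool-level policy described above, assuming the import path github.com/hashicorp/yamux and a hypothetical Pool type with an idle list and per-stream last-use timestamps (only Stream.Shrink itself comes from yamux):

package pool

import (
	"sync"
	"time"

	"github.com/hashicorp/yamux"
)

// pooledStream pairs a yamux stream with the time it was last used.
type pooledStream struct {
	stream   *yamux.Stream
	lastUsed time.Time
}

// Pool holds idle streams for reuse; all fields here are illustrative.
type Pool struct {
	mu   sync.Mutex
	idle []*pooledStream
}

// ReapIdle shrinks the receive buffers of streams that have been idle
// longer than maxIdle. The application layer knows when a stream is
// idle, which yamux itself cannot infer.
func (p *Pool) ReapIdle(maxIdle time.Duration) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, ps := range p.idle {
		if time.Since(ps.lastUsed) > maxIdle {
			ps.stream.Shrink()
		}
	}
}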
