
rpc performance degradation #216

Closed
ugorji opened this issue Nov 9, 2017 · 5 comments

Comments

@ugorji
Owner

ugorji commented Nov 9, 2017

@markgoodhead made a comment in #113 (see #113 (comment)):

Bump for this - we've just identified a recent change to master that had a dramatically negative effect on our application's RPC performance, and we don't yet know which commits caused it.

@ugorji
Owner Author

ugorji commented Nov 9, 2017

@markgoodhead

I know I removed buffering from the RPC codebase, because it could force the consumption of more bytes than necessary. To compensate, we exposed BufferedReader and BufferedWriter accessors, but that wasn't a good design.

I will add a NewReadWriteCloser function that takes a buffer size and allows you to create a bufio wrapper and pass that in. Hopefully that works for you.

@ugorji ugorji closed this as completed in 8c44cd4 Nov 9, 2017
@markgoodhead

Yes, it was related to the buffer changes - we'd already resolved it by essentially implementing what you suggested:

```go
type bufferedConn struct {
	io.ReadCloser
	*bufio.Writer
}

...

bc := bufferedConn{
	ReadCloser: conn,
	Writer:     bufio.NewWriterSize(conn, 131072),
}
var mh codec.MsgpackHandle
rpcCodec := codec.MsgpackSpecRpc.ClientCodec(bc, &mh)
```

@ugorji
Owner Author

ugorji commented Nov 10, 2017

This is good, and much better (IMO) than increasing the API surface area with a new function when it is just as easy to create yourself.

I will quickly remove the NewReadWriteCloser() function and the exposed type, and just update the documentation with some sample code (basically what you have here).

@ugorji ugorji reopened this Nov 10, 2017
ugorji added a commit that referenced this issue Nov 10, 2017
…tead

Instead of adding a new exposed type and NewReadWriteCloser(...)
convenience function, just document how a user can create a
buffered connection for use in rpc.

Also, there is no downside to buffering during a write.
The downside exists only during a read.

Consequently, we will use a buffer internally
when passed a non-buffered ReadWriteCloser.

Updates #216
@ugorji
Owner Author

ugorji commented Nov 10, 2017

Closed via f894406

@ugorji ugorji closed this as completed Nov 10, 2017
@ugorji
Owner Author

ugorji commented Nov 10, 2017

@markgoodhead with the new change, you shouldn't have to create a bufferedConn anymore.

The downside to buffering only existed during a read: we might read more bytes than needed into the buffer, with no way to return them to the underlying stream.

However, during write, it is completely ok to buffer, as long as we flush at the end of each write (which we always do). Consequently, I added write buffering back implicitly.
