client: use long-lived streams for Append RPC #458

Open
ijsong opened this issue May 24, 2023 · 0 comments

Comments

@ijsong
Member

ijsong commented May 24, 2023

In #449, we updated the Append RPC from unary to streaming. The client API now establishes a stream, sends an AppendRequest, receives an AppendResponse, and closes the stream. This change preserves the semantics of the Append API and simplifies the underlying code.

```go
// Append stores data to the log stream specified by the topicID and the logStreamID.
// It returns AppendResults containing valid GLSNs if the append completes successfully.
func (c *LogClient) Append(ctx context.Context, tpid types.TopicID, lsid types.LogStreamID, data [][]byte) ([]snpb.AppendResult, error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	// A new stream is established on every call.
	stream, err := c.rpcClient.Append(ctx)
	if err != nil {
		return nil, err
	}
	// The stream is closed as soon as this call returns.
	defer func() {
		_ = stream.CloseSend()
	}()

	// Send a single request and wait for its response.
	req := &snpb.AppendRequest{
		TopicID:     tpid,
		LogStreamID: lsid,
		Payload:     data,
	}
	if err := stream.Send(req); err != nil {
		return nil, err
	}
	rsp, err := stream.Recv()
	if err != nil {
		return nil, err
	}
	return rsp.Results, nil
}
```

Creating a new stream for every API call is inefficient because each call pays the cost of RPC initiation. According to gRPC performance best practices, we can avoid this by reusing long-lived streams, which requires changing the client Append API.
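
As a rough sketch of where this could go, the client could keep one Append stream open per log stream and reuse it across calls, along the lines of the LogStreamAppender mentioned in #433. The type name, fields, import paths, and locking below are illustrative assumptions, not the final API:

```go
package varlogclient // hypothetical package name for illustration

import (
	"context"
	"sync"

	"github.com/kakao/varlog/pkg/types" // import paths assumed
	"github.com/kakao/varlog/proto/snpb"
)

// logStreamAppender is an illustrative sketch: it opens the Append stream once
// and reuses it for every call, instead of opening and closing a stream per call.
type logStreamAppender struct {
	mu     sync.Mutex
	stream snpb.LogIO_AppendClient // assumed name of the generated stream client
	cancel context.CancelFunc
	tpid   types.TopicID
	lsid   types.LogStreamID
}

// AppendBatch sends one request on the long-lived stream and waits for its
// response. Calls are serialized here for simplicity; a real implementation
// could pipeline requests with separate sender and receiver goroutines.
func (a *logStreamAppender) AppendBatch(data [][]byte) ([]snpb.AppendResult, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	req := &snpb.AppendRequest{
		TopicID:     a.tpid,
		LogStreamID: a.lsid,
		Payload:     data,
	}
	if err := a.stream.Send(req); err != nil {
		return nil, err
	}
	rsp, err := a.stream.Recv()
	if err != nil {
		return nil, err
	}
	return rsp.Results, nil
}

// Close tears down the long-lived stream when the caller is done appending.
func (a *logStreamAppender) Close() error {
	err := a.stream.CloseSend()
	a.cancel()
	return err
}
```

The key difference from the current Append is that stream setup cost is paid once per appender rather than once per call; re-establishing a broken stream is omitted from this sketch.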

ijsong added a commit that referenced this issue Jun 7, 2023
### What this PR does

This patch changes the Append RPC handler to support pipelined requests without changing the client's API, so users can keep using the Append API transparently.

Supporting pipelined requests can add overhead, since it requires additional goroutines and concurrent queues. To reduce that overhead, this change uses a [reader-biased mutex](https://github.com/puzpuzpuz/xsync#rbmutex) instead of the built-in RWMutex to avoid shared-lock contention. In our experiments, this PR showed very little overhead. Furthermore, we can make the existing Append API more efficient by [using a long-lived stream](https://grpc.io/docs/guides/performance/#general): the current implementation creates a new stream on every Append call, which incurs unnecessary work such as RPC initiation. We can reuse long-lived streams by changing the client API; see #458.
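
For context, here is a minimal sketch of the reader-biased locking pattern referenced above, using xsync's RBMutex. The queue type and its fields are made up for illustration and are not the PR's actual code:

```go
package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v2"
)

// appendQueue is a toy stand-in for state shared between many Append callers
// and a single sender goroutine; its fields are illustrative only.
type appendQueue struct {
	mu      *xsync.RBMutex
	closed  bool
	pending chan string // concurrent queue, assumed to be drained elsewhere
}

// push is the hot path: many goroutines only need to read the closed flag, so a
// reader-biased lock avoids the shared-lock contention of sync.RWMutex.RLock.
func (q *appendQueue) push(item string) bool {
	t := q.mu.RLock()
	defer q.mu.RUnlock(t)
	if q.closed {
		return false
	}
	q.pending <- item
	return true
}

// close is the cold path: it takes the exclusive lock once to flip the flag,
// which also guarantees no push can race with closing the channel.
func (q *appendQueue) close() {
	q.mu.Lock()
	defer q.mu.Unlock()
	if !q.closed {
		q.closed = true
		close(q.pending)
	}
}

func main() {
	q := &appendQueue{mu: xsync.NewRBMutex(), pending: make(chan string, 8)}
	q.push("append-request-1")
	fmt.Println(<-q.pending)     // append-request-1
	q.close()
	fmt.Println(q.push("again")) // false: queue is already closed
}
```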

### Which issue(s) this PR resolves

This PR implements the server-side parts of the LogStreamAppender mentioned in #433. It can also be used to pipeline the generic Append RPC described in #441.