storagenode,client: pipelining Append RPC #441

Open
ijsong opened this issue May 10, 2023 · 0 comments

ijsong commented May 10, 2023

Motivation

Currently, the Append RPC follows a simple request-response pattern: the StorageNode sends back a reply message once a client calls Append. This keeps the architecture simple and makes it easy to provide a total order over the log.

We can improve this RPC pattern with pipelining, similar to HTTP pipelining.

Design

Bidirectional streaming Append RPC

To implement a pipelined RPC, the current Append RPC, which is unary, is insufficient. Append has to become a bidirectional streaming RPC. Fortunately, gRPC supports this pattern natively.
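
For illustration, the sketch below shows roughly what the Go side of such a bidirectional streaming Append could look like. The message fields, interface name, and method set are placeholders assumed for this issue, not the actual Varlog-generated API.

```go
// Hypothetical, simplified stand-ins for the protobuf messages.
type AppendRequest struct {
	TopicID int32
	Payload []byte
}

type AppendResponse struct {
	GLSN uint64 // global log sequence number assigned by the log stream
}

// AppendStreamClient sketches the shape of a gRPC-generated client stream
// for a bidirectional streaming Append RPC (names are placeholders).
type AppendStreamClient interface {
	// Send enqueues a request without waiting for its response,
	// which is what makes pipelining possible.
	Send(*AppendRequest) error
	// Recv returns the next response; in this design, the StorageNode
	// replies in request order.
	Recv() (*AppendResponse, error)
	// CloseSend signals that the client will send no more requests.
	CloseSend() error
}
```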

Nonblocking API

The Append RPC should be a nonblocking call to support pipelining. It is also critical to keep the order among sequential invocations: the GLSNs of msg1, msg2, and msg3 below must be assigned in the order in which the calls were made.

```go
client.Append(ctx, topicID, msg1)
client.Append(ctx, topicID, msg2)
client.Append(ctx, topicID, msg3)
```

```mermaid
sequenceDiagram
    Client->>StorageNode: AppendRequest(msg1)
    Client->>StorageNode: AppendRequest(msg2)
    Client->>StorageNode: AppendRequest(msg3)
    StorageNode->>Client: AppendResponse(msg1)
    StorageNode->>Client: AppendResponse(msg2)
    StorageNode->>Client: AppendResponse(msg3)
```
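
Below is a minimal sketch of how such a nonblocking client could be built on top of the bidirectional stream, reusing the hypothetical types from the sketch above. Because the StorageNode replies in request order, responses can be matched to pending calls with a simple FIFO. This is one possible shape, not the actual Varlog client API.

```go
import "context"

// Result is delivered asynchronously for each pipelined Append.
type Result struct {
	Resp *AppendResponse
	Err  error
}

// Appender pipelines Append requests over a single bidirectional stream.
// Append must be called from a single goroutine so that the send order
// matches the order of waiters in the FIFO; shutdown handling is omitted.
type Appender struct {
	stream  AppendStreamClient
	pending chan chan Result // FIFO of waiters, one per in-flight request
}

func NewAppender(stream AppendStreamClient, maxInflight int) *Appender {
	a := &Appender{
		stream:  stream,
		pending: make(chan chan Result, maxInflight),
	}
	go a.recvLoop()
	return a
}

// Append sends the request and returns immediately with a channel that
// receives the result once the matching response arrives.
func (a *Appender) Append(ctx context.Context, topicID int32, payload []byte) (<-chan Result, error) {
	if err := ctx.Err(); err != nil {
		return nil, err
	}
	if err := a.stream.Send(&AppendRequest{TopicID: topicID, Payload: payload}); err != nil {
		return nil, err
	}
	waiter := make(chan Result, 1)
	a.pending <- waiter // blocks only when maxInflight requests are outstanding
	return waiter, nil
}

// recvLoop completes waiters in FIFO order: the i-th Recv carries the
// response to the i-th request because the StorageNode replies in order.
func (a *Appender) recvLoop() {
	for waiter := range a.pending {
		resp, err := a.stream.Recv()
		waiter <- Result{Resp: resp, Err: err}
	}
}
```

With this shape, the three calls in the snippet above are issued back to back without waiting for replies, and their GLSNs still come back in invocation order.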

Challenges

Global Order

The client must also keep a global sequential order across the Append and AppendTo RPCs. However, this is not trivial with nonblocking calls.

Even though the client invokes Append before AppendTo, msg2's GLSN can precede msg1's, since these Append-variant RPCs may add logs to different log streams.

```go
client.Append(ctx, topicID, msg1)
client.AppendTo(ctx, topicID, types.LogStreamID(2), msg2)
```

```mermaid
sequenceDiagram
    Client->>StorageNode/LogStream1: AppendRequest(msg1)
    Client->>StorageNode/LogStream2: AppendRequest(msg2)
    StorageNode/LogStream2->>Client: AppendResponse(msg2)
    StorageNode/LogStream1->>Client: AppendResponse(msg1)
```
ijsong added a commit that referenced this issue May 23, 2023
This patch changes the Append RPC handler to support pipelined requests without changing the client's API. Therefore, users can use the Append API transparently.

Supporting pipelined requests can introduce overhead since it requires additional goroutines and concurrent queues. In our experiments, this PR showed little overhead. This change uses a [reader-biased mutex](https://github.com/puzpuzpuz/xsync#rbmutex) instead of the built-in RWMutex to avoid shared-lock contention.

This PR implements the server-side parts of the LogStreamAppender mentioned in #433. It can also be used for pipelining the generic Append RPC described in #441.
ijsong added a commit that referenced this issue Jun 7, 2023
### What this PR does

This patch changes the Append RPC handler to support pipelined requests without changing the client's API. Therefore, users can use the Append API transparently.

Supporting pipelined requests can introduce overhead since it requires additional goroutines and concurrent queues. To reduce that overhead, this change uses a [reader-biased mutex](https://github.com/puzpuzpuz/xsync#rbmutex) instead of the built-in RWMutex to avoid shared-lock contention. In our experiments, this PR showed very little overhead. Furthermore, we can make the existing Append API more efficient by [using a long-lived stream](https://grpc.io/docs/guides/performance/#general): the current implementation creates a new stream for every Append call, which incurs unnecessary work such as RPC initiation. We can reuse long-lived streams by changing the client API; see #458.

### Which issue(s) this PR resolves

This PR implements the server-side parts of the LogStreamAppender mentioned in #433. It can also be used for pipelining the generic Append RPC described in #441.
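
As context for the locking change mentioned above, here is a minimal, self-contained sketch of how a reader-biased mutex from puzpuzpuz/xsync can guard a read-mostly flag in place of a sync.RWMutex. The struct and field names are hypothetical, not the actual Varlog code, and the import path depends on the xsync module version.

```go
package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v2" // import path varies with the module version
)

// sealState is a hypothetical read-mostly value; readers vastly outnumber
// writers, which is the case the reader-biased mutex optimizes for.
type sealState struct {
	mu     *xsync.RBMutex
	sealed bool
}

func newSealState() *sealState {
	return &sealState{mu: xsync.NewRBMutex()}
}

// isSealed is the hot read path: RLock returns a reader token that must be
// passed back to RUnlock.
func (s *sealState) isSealed() bool {
	t := s.mu.RLock()
	defer s.mu.RUnlock(t)
	return s.sealed
}

// seal is the rare write path and takes the exclusive lock.
func (s *sealState) seal() {
	s.mu.Lock()
	s.sealed = true
	s.mu.Unlock()
}

func main() {
	s := newSealState()
	fmt.Println(s.isSealed()) // false
	s.seal()
	fmt.Println(s.isSealed()) // true
}
```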