38343: storage: introduce concurrent Raft proposal buffer r=tbg a=nvanbenschoten

This change introduces a new multi-producer, single-consumer buffer for Raft proposal ingestion into the Raft replication pipeline. This buffer becomes the new coordination point between "above Raft" goroutines, which have just finished evaluation and want to replicate a command, and a Replica's "below Raft" goroutine, which collects these commands and begins the replication process. The structure improves upon the current approach to this interaction in three important ways.

The first is that the structure supports concurrent insertion of proposals by multiple proposer goroutines. This significantly increases the amount of concurrency for non-conflicting writes within a single Range. The proposal buffer achieves this without exclusive locking by using atomics to index into an array. This is complicated by the strong desire for proposals to be proposed in the same order in which their MaxLeaseIndex is assigned. The buffer addresses this by selecting a slot in its array and a MaxLeaseIndex for a proposal in a single atomic operation.

The second improvement is that the new structure allows RaftCommand marshaling to be lifted entirely out of any critical section. Previously, the allocation, marshaling, and encoding of a RaftCommand were all performed under the exclusive Replica lock. Before 91abab1, there was even a second allocation and a copy under this lock. This locking interacted poorly with both "above Raft" processing (which repeatedly acquires a shared lock) and "below Raft" processing (which occasionally acquires an exclusive lock). The new concurrent Raft proposal buffer is able to push this allocation and marshaling completely outside of the exclusive or shared Replica lock. It does so, despite the fact that the MaxLeaseIndex of the RaftCommand has not yet been assigned, by splitting marshaling into two steps and using a new "footer" proto.
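The lock-free insertion path described above can be sketched as follows. This is a minimal illustration, not the actual CockroachDB implementation: all names (`propBuf`, `insert`, `liBase`) are hypothetical, and the real buffer packs more state into its counter and handles flushing and wraparound. The point it shows is that one atomic increment both reserves an array slot and assigns the MaxLeaseIndex, so the two orderings can never diverge:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type proposal struct{ maxLeaseIndex uint64 }

// propBuf is a sketch of a multi-producer buffer in which a single atomic
// increment both reserves an array slot and selects a MaxLeaseIndex.
type propBuf struct {
	cnt    uint64 // atomic counter: number of proposals inserted
	liBase uint64 // MaxLeaseIndex base from the previous flush
	arr    []*proposal
}

// insert reserves a slot and a MaxLeaseIndex in one atomic operation,
// with no mutex on the insertion path.
func (b *propBuf) insert(p *proposal) {
	n := atomic.AddUint64(&b.cnt, 1) // the single atomic op
	p.maxLeaseIndex = b.liBase + n   // slot order == MaxLeaseIndex order
	b.arr[n-1] = p
}

func main() {
	b := &propBuf{arr: make([]*proposal, 64)}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			b.insert(&proposal{})
		}()
	}
	wg.Wait()
	// Regardless of goroutine scheduling, slot i always holds the
	// proposal with MaxLeaseIndex i+1.
	for i, p := range b.arr[:8] {
		fmt.Println(i, p.maxLeaseIndex)
	}
}
```

In the real structure the single consumer later flushes the filled slots in order, which is what makes the slot-order/MaxLeaseIndex-order invariant valuable.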
The first step is to allocate and marshal the majority of the encoded Raft command outside of any lock. The second step is to marshal just the small "footer" proto with the MaxLeaseIndex field into the same byte slice, which has been pre-sized with a small amount of extra capacity, after the MaxLeaseIndex has been selected. This approach lifts a major expense out of the Replica mutex.

The final improvement is to increase the amount of batching performed between Raft proposals. This reduces the number of messages required to coordinate their replication throughout the entire replication pipeline. To start, batching allows multiple Raft entries to be sent in the same MsgApp from the leader to followers. Doing so then results in only a single MsgAppResp being sent for all of these entries back to the leader, instead of one per entry. Finally, a single MsgAppResp results in only a single empty MsgApp with the new commit index being sent from the leader to followers. All of this is made possible by `Step`ping the Raft `RawNode` with a `MsgProp` containing multiple entries, instead of using the `Propose` API directly, which internally `Step`s the Raft `RawNode` with a `MsgProp` containing only one entry. Doing so demonstrated a very large improvement in `rafttoy` and is showing a similar win here. The proposal buffer provides a clean place to perform this batching, so this is a natural time to introduce it.
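The two-step marshaling can be sketched as below. This is an illustration only, using stdlib varint encoding as a stand-in for proto marshaling (the function names and the footer framing are hypothetical, not the actual RaftCommand wire format). The key property is that pre-sizing the slice in step one guarantees the footer append in step two reuses the same backing array, so no allocation or copy happens once the MaxLeaseIndex is known:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// footerCap is the spare capacity reserved for the footer: enough for a
// maximally-sized varint-encoded MaxLeaseIndex in this toy encoding.
const footerCap = binary.MaxVarintLen64

// marshalWithoutFooter models step one: marshal the bulk of the command
// outside of any lock, pre-sizing the buffer for the footer.
func marshalWithoutFooter(cmd []byte) []byte {
	buf := make([]byte, 0, len(cmd)+footerCap)
	return append(buf, cmd...)
}

// appendFooter models step two: once the MaxLeaseIndex has been selected,
// append just the tiny footer into the reserved capacity.
func appendFooter(buf []byte, maxLeaseIndex uint64) []byte {
	var tmp [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(tmp[:], maxLeaseIndex)
	return append(buf, tmp[:n]...) // never reallocates: capacity was reserved
}

func main() {
	buf := marshalWithoutFooter([]byte("encoded-raft-command"))
	p0 := fmt.Sprintf("%p", buf)
	buf = appendFooter(buf, 42)
	p1 := fmt.Sprintf("%p", buf)
	fmt.Println("same backing array:", p0 == p1)
	fmt.Println("total bytes:", len(buf))
}
```

Because step two touches only a few bytes in pre-reserved space, it is cheap enough to perform at the point where the MaxLeaseIndex is assigned, while the expensive step-one marshaling stays entirely outside the Replica lock.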
### Benchmark Results

```
name                             old ops/sec  new ops/sec  delta
kv95/seq=false/cores=16/nodes=3   67.5k ± 1%   67.2k ± 1%     ~      (p=0.421 n=5+5)
kv95/seq=false/cores=36/nodes=3    144k ± 1%    143k ± 1%     ~      (p=0.320 n=5+5)
kv0/seq=false/cores=16/nodes=3    41.2k ± 2%   42.3k ± 3%   +2.49%   (p=0.000 n=10+10)
kv0/seq=false/cores=36/nodes=3    66.8k ± 2%   69.1k ± 2%   +3.35%   (p=0.000 n=10+10)
kv95/seq=true/cores=16/nodes=3    59.3k ± 1%   62.1k ± 2%   +4.83%   (p=0.008 n=5+5)
kv95/seq=true/cores=36/nodes=3     100k ± 1%    125k ± 1%  +24.37%   (p=0.008 n=5+5)
kv0/seq=true/cores=16/nodes=3     16.1k ± 2%   21.8k ± 4%  +35.21%   (p=0.000 n=9+10)
kv0/seq=true/cores=36/nodes=3     18.4k ± 3%   24.8k ± 2%  +35.29%   (p=0.000 n=10+10)

name                             old p50(ms)  new p50(ms)  delta
kv95/seq=false/cores=16/nodes=3    0.70 ± 0%    0.70 ± 0%     ~      (all equal)
kv95/seq=false/cores=36/nodes=3    0.70 ± 0%    0.70 ± 0%     ~      (all equal)
kv0/seq=false/cores=16/nodes=3     2.86 ± 2%    2.80 ± 0%   -2.10%   (p=0.011 n=10+10)
kv0/seq=false/cores=36/nodes=3     3.87 ± 2%    3.80 ± 0%   -1.81%   (p=0.003 n=10+10)
kv95/seq=true/cores=16/nodes=3     0.70 ± 0%    0.70 ± 0%     ~      (all equal)
kv95/seq=true/cores=36/nodes=3     0.70 ± 0%    0.70 ± 0%     ~      (all equal)
kv0/seq=true/cores=16/nodes=3      7.97 ± 2%    5.86 ± 2%  -26.44%   (p=0.000 n=9+10)
kv0/seq=true/cores=36/nodes=3      15.7 ± 0%    11.7 ± 4%  -25.61%   (p=0.000 n=8+10)

name                             old p99(ms)  new p99(ms)  delta
kv95/seq=false/cores=16/nodes=3    2.90 ± 0%    2.94 ± 2%     ~      (p=0.444 n=5+5)
kv95/seq=false/cores=36/nodes=3    3.90 ± 0%    3.98 ± 3%     ~      (p=0.444 n=5+5)
kv0/seq=false/cores=16/nodes=3     8.90 ± 0%    8.40 ± 0%   -5.62%   (p=0.000 n=10+8)
kv0/seq=false/cores=36/nodes=3     11.0 ± 0%    10.4 ± 3%   -5.91%   (p=0.000 n=10+10)
kv95/seq=true/cores=16/nodes=3     4.50 ± 0%    3.18 ± 4%  -29.33%   (p=0.000 n=4+5)
kv95/seq=true/cores=36/nodes=3     11.2 ± 3%     4.7 ± 0%  -58.04%   (p=0.008 n=5+5)
kv0/seq=true/cores=16/nodes=3      11.5 ± 0%     9.4 ± 0%  -18.26%   (p=0.000 n=9+9)
kv0/seq=true/cores=36/nodes=3      19.9 ± 0%    15.3 ± 2%  -22.86%   (p=0.000 n=9+10)
```

As expected, the majority of the improvement from this change comes when writing to a single Range (i.e. a write hotspot). In those cases, this change (and those in the following two commits) improves performance by up to **35%**.

NOTE: the Raft proposal buffer hooks into the rest of the Storage package through a fairly small and well-defined interface. The primary reason for doing so was to make the structure easy to move to a `storage/replication` package if/when we move in that direction.

Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>