[2/N][zero serialization] Make Batcher operate on chunks without ser/deser #700
Conversation
```diff
@@ -160,7 +160,7 @@ func TestBatchTrigger(t *testing.T) {
 	assert.Nil(t, err)
 	count, size := encodingStreamer.EncodedBlobstore.GetEncodedResultSize()
 	assert.Equal(t, count, 1)
-	assert.Equal(t, size, uint64(16384))
+	assert.Equal(t, size, uint64(26630))
```
A data point on the size reduction from the new chunk encoding with Gnark (done separately): 16384 is the theoretical size (4 bytes/symbol * numSymbols in the chunk), whereas 26630 is the number of bytes produced by Gob encoding. The new chunk encoding will achieve the former.
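To illustrate the kind of overhead being discussed, here is a standalone sketch; the struct layout is an assumption (not the project's actual chunk types), and the symbol count/size are taken from the figures in the comment above (4 bytes/symbol, so 4096 symbols for the 16384-byte theoretical size):

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// chunk is a hypothetical stand-in for an encoded chunk: a list of
// fixed-size symbols. The real types live in the encoding package and
// differ from this sketch.
type chunk struct {
	Symbols [][]byte
}

func main() {
	// Figures from the comment above: 4 bytes/symbol, 4096 symbols = 16384 bytes.
	const numSymbols, symbolSize = 4096, 4

	c := chunk{Symbols: make([][]byte, numSymbols)}
	for i := range c.Symbols {
		c.Symbols[i] = make([]byte, symbolSize)
	}

	// Gob adds type metadata and per-element framing on top of the raw
	// symbol bytes, which is where the extra bytes come from.
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(c); err != nil {
		panic(err)
	}

	flat := numSymbols * symbolSize // the "theoretical" flat size
	fmt.Printf("flat: %d bytes, gob: %d bytes (overhead: %d)\n",
		flat, buf.Len(), buf.Len()-flat)
}
```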
disperser/batcher/batcher_test.go
Outdated
```go
assert.Equal(t, uint64(27631), size) // Robert checks it
```
Remove the "Robert checks it" comment.
Done
node/grpc/server_load_test.go
Outdated
```diff
-		Bundles: make(core.Bundles),
+	blobMessagesByOp[opID] = append(blobMessagesByOp[opID], &core.EncodedBlobMessage{
+		BlobHeader: blobHeaders[i],
+		EncodedBundles: make(map[core.QuorumID]*core.ChunksData),
```
i.e. EncodedBundles?
Yes
lgtm
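For context on the thread above, a rough sketch of the shape the load test now builds per operator. Only the identifiers visible in the diff (EncodedBlobMessage, BlobHeader, EncodedBundles, core.QuorumID, core.ChunksData) come from the PR; the field layouts below are illustrative assumptions.

```go
package core

// QuorumID identifies a quorum; assumed here to be a small integer type.
type QuorumID = uint8

// BlobHeader is a placeholder in this sketch; the real header carries the
// blob's commitments and quorum parameters.
type BlobHeader struct{}

// ChunksData is assumed to hold chunks in already-serialized form, so the
// Batcher can forward them to operators without a decode/encode round trip.
type ChunksData struct {
	Chunks [][]byte // serialized chunks, passed through as-is
}

// EncodedBlobMessage replaces the old Bundles field with per-quorum
// encoded bundles, matching the change shown in the load test diff.
type EncodedBlobMessage struct {
	BlobHeader     *BlobHeader
	EncodedBundles map[QuorumID]*ChunksData
}
```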
Why are these changes needed?
This eliminates both the deserialization and serialization of chunks at the Batcher.
This matters because the cost of serializing chunks alone can be significant: e.g. it took 61s to serialize the dispersal request for a large operator.
This cost is on the critical path of the E2E flow, so eliminating it (along with the cost of deserializing chunks from the Encoder) directly improves E2E performance.
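A minimal sketch of the idea, with hypothetical names (decodeChunk, encodeChunk, and the build functions below are not the actual Batcher API): previously the Batcher deserialized chunks received from the Encoder and re-serialized them when building the dispersal request; after this change it forwards the serialized chunk bytes untouched.

```go
package main

import "fmt"

// chunk is a stand-in for a structured chunk; the real type lives in the
// encoding package.
type chunk struct{ data []byte }

// decodeChunk and encodeChunk stand in for the conversions the Batcher
// no longer has to perform.
func decodeChunk(b []byte) (chunk, error) { return chunk{data: b}, nil }
func encodeChunk(c chunk) []byte          { return c.data }

// Before: every chunk is deserialized and then serialized again, and both
// conversions sit on the E2E critical path.
func buildDispersalOld(serialized [][]byte) ([][]byte, error) {
	out := make([][]byte, 0, len(serialized))
	for _, b := range serialized {
		c, err := decodeChunk(b)
		if err != nil {
			return nil, err
		}
		out = append(out, encodeChunk(c))
	}
	return out, nil
}

// After: chunks are treated as opaque bytes and forwarded as-is.
func buildDispersalNew(serialized [][]byte) [][]byte {
	return serialized
}

func main() {
	chunks := [][]byte{{1, 2, 3}, {4, 5, 6}}
	old, _ := buildDispersalOld(chunks)
	fmt.Println(len(old), len(buildDispersalNew(chunks)))
}
```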
Testing
In addition to the unit tests, this was also tested in Preprod with the following setup:
The Batcher was tested in these two cases:
This shows the compatibility/safety of:
Checks