
Investigate performance for bigger gRPC message sizes #77

Open
fyrchik opened this issue Feb 28, 2023 · 0 comments
fyrchik commented Feb 28, 2023

By default, the maximum gRPC message size is 4 MiB. This causes big objects to be split into 4 MiB chunks, so for MaxObjectSize = 64 MiB at least 16 messages are sent (each of which must also be signed and verified). For custom deployments where we have full control over the network, we could set this size depending on MaxObjectSize on both the client and the server.
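The message count above is just ceiling division of the object size by the chunk size; a minimal sketch (constants taken from the numbers in this issue, not from any actual node configuration):

```go
package main

import "fmt"

func main() {
	const (
		MiB           = 1 << 20
		maxMsgSize    = 4 * MiB  // default gRPC maximum message size
		maxObjectSize = 64 * MiB // example MaxObjectSize from this issue
	)
	// Number of stream messages needed to transfer one object,
	// ignoring per-message header and signature overhead.
	chunks := (maxObjectSize + maxMsgSize - 1) / maxMsgSize
	fmt.Println(chunks) // 16
}
```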

In this task:

  1. Set grpc.MaxRecvMsgSize in node to some high value (70 MiB).
  2. Perform benchmarks of 64 MiB objects with custom client build (see cli: Fix default buffer size for object PUT nspcc-dev/neofs-node#2243 for an example on what needs to be changed).
  3. If the observations support the hypothesis, add a config parameter for this and create tasks to support it on the client side:
  • api-go and sdk-go
  • k6
  • frostfs-cli parameter
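If the benchmarks confirm the gain, the node-side config parameter from step 3 might look something like the fragment below. All key names here are hypothetical sketches; the actual schema would be decided during implementation:

```yaml
# Hypothetical node config fragment: "max_recv_msg_size" is an
# illustrative key name, not an existing option.
grpc:
  - endpoint: s01.example.com:8080
    # Maximum accepted gRPC message size. Should be set somewhat above
    # MaxObjectSize (e.g. 70 MiB for MaxObjectSize = 64 MiB) to leave
    # room for message headers and signatures.
    max_recv_msg_size: 73400320 # 70 MiB
```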

In theory this enables future optimizations, such as replicating an object from the blobstor without unmarshaling. (In that case, also check that validation is performed when the object is received; we don't want to propagate possibly corrupted data across the cluster, see https://www.usenix.org/system/files/conference/fast17/fast17-ganesan.pdf.)

Somewhat related: TrueCloudLab/frostfs-api#9
