
Fix s3 upload performance regression #98

Merged 5 commits into thanos-io:main on Jan 28, 2024

Conversation

@pracucci (Contributor) commented on Jan 26, 2024

The symptom

PR #66 introduced a regression in S3 upload performance. After that commit, S3 uploads are significantly slower. Here you can see how the upload latency changed for one of our Grafana Mimir clusters running in AWS:

[Screenshot 2024-01-26 at 14:46:33: S3 upload latency over time for a Grafana Mimir cluster running in AWS]

The change was deployed to prod on 10/06, which is when the latency increased. Since upload performance in Mimir is not critical (e.g. it affects neither read nor write path latency), the regression went undetected for quite some time. Until today.

The cause

Why the regression? Minio has some optimizations for readers that implement io.ReaderAt, because it can leverage ReadAt(). We typically call Upload() with an os.File as the reader (I think Thanos does the same). PR #66 wrapped the reader with the timing reader, which doesn't implement io.ReaderAt, so Minio could no longer optimize the upload.
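
To illustrate the mechanism, here is a minimal Go sketch (not the actual objstore or minio-go code; plainWrapper is a hypothetical stand-in for the old timing wrapper): wrapping an *os.File in a type that only forwards Read() makes a type assertion for io.ReaderAt fail, even though the underlying file supports ReadAt().

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// plainWrapper stands in for a timing wrapper that only implements io.Reader.
type plainWrapper struct {
	r io.Reader
}

func (w plainWrapper) Read(p []byte) (int, error) { return w.r.Read(p) }

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var direct io.Reader = f
	var wrapped io.Reader = plainWrapper{r: f}

	_, directIsReaderAt := direct.(io.ReaderAt)   // true: *os.File implements ReadAt()
	_, wrappedIsReaderAt := wrapped.(io.ReaderAt) // false: the wrapper hides ReadAt()

	fmt.Println(directIsReaderAt, wrappedIsReaderAt)
}
```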

The fix

In this PR I propose a fix for the issue, introducing timingReaderSeekerReaderAt, which embeds timingReaderSeeker, which in turn embeds timingReader.

To simplify the code, I've removed the usage of nopSeekerCloserWithSize() and NopCloserWithSize() in Upload() and pushed down to timingReader the decision of whether it should Close() the wrapped reader or not. The rename from timingReadCloser to timingReader follows from this change, given that it now takes an io.Reader as input.
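
For clarity, here is a rough sketch of the wrapper hierarchy described above (the fields, methods and constructor are illustrative, not the exact code in this PR): the constructor picks the richest wrapper the input reader supports, so io.Seeker and io.ReaderAt are preserved when the underlying reader provides them.

```go
package objstore

import "io"

// timingReader wraps any io.Reader; the metrics fields and the
// "should we Close() the wrapped reader?" flag would live here.
type timingReader struct {
	io.Reader
}

// timingReaderSeeker additionally exposes Seek() when the wrapped reader supports it.
type timingReaderSeeker struct {
	timingReader
	seeker io.Seeker
}

func (t timingReaderSeeker) Seek(offset int64, whence int) (int64, error) {
	return t.seeker.Seek(offset, whence)
}

// timingReaderSeekerReaderAt additionally exposes ReadAt(), which lets Minio optimize uploads.
type timingReaderSeekerReaderAt struct {
	timingReaderSeeker
	readerAt io.ReaderAt
}

func (t timingReaderSeekerReaderAt) ReadAt(p []byte, off int64) (int, error) {
	return t.readerAt.ReadAt(p, off)
}

// newTimingReader returns the most capable wrapper for the given reader.
func newTimingReader(r io.Reader) io.Reader {
	base := timingReader{Reader: r}
	seeker, canSeek := r.(io.Seeker)
	if !canSeek {
		return base
	}
	rs := timingReaderSeeker{timingReader: base, seeker: seeker}
	if readerAt, ok := r.(io.ReaderAt); ok {
		return timingReaderSeekerReaderAt{timingReaderSeeker: rs, readerAt: readerAt}
	}
	return rs
}
```

With wrapping done this way, uploading an os.File goes through a wrapper that still exposes ReadAt(), so Minio's io.ReaderAt-based optimization kicks in again.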

Manual test

I've manually verified that this PR fixes the upload speed regression by running a test in AWS which uploads a 1GB object to S3 multiple times.
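
For reference, a rough sketch of what such a timing loop could look like (the uploader interface, object names and payload construction here are illustrative, not the exact test that produced the numbers below):

```go
package uploadtest

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"time"
)

// uploader is a stand-in for the bucket client under test.
type uploader interface {
	Upload(ctx context.Context, name string, r io.Reader) error
}

// timeUploads uploads a 1GB object `runs` times and prints the elapsed time of each upload.
func timeUploads(ctx context.Context, bkt uploader, withMetrics bool, runs int) error {
	payload := bytes.Repeat([]byte{0xff}, 1<<30) // 1GB payload held in memory for the test

	for i := 0; i < runs; i++ {
		start := time.Now()
		// bytes.Reader implements io.Seeker and io.ReaderAt, so the fixed
		// timing wrapper can preserve those interfaces for the S3 client.
		if err := bkt.Upload(ctx, fmt.Sprintf("test-object-%d", i), bytes.NewReader(payload)); err != nil {
			return err
		}
		fmt.Printf("With metrics: %v Elapsed: %s\n", withMetrics, time.Since(start))
	}
	return nil
}
```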

Before this PR

S3 bucket not wrapped with metrics:

With metrics: false Elapsed: 2.262466521s
With metrics: false Elapsed: 2.328912616s
With metrics: false Elapsed: 2.312762143s

S3 bucket wrapped with metrics (note the higher latency):

With metrics: true Elapsed: 9.590695888s
With metrics: true Elapsed: 8.402357109s
With metrics: true Elapsed: 7.858041235s

After this PR

S3 bucket not wrapped with metrics:

With metrics: false Elapsed: 2.767424849s
With metrics: false Elapsed: 2.066829353s
With metrics: false Elapsed: 2.114329674s

S3 bucket wrapped with metrics (note there's no longer higher latency):

With metrics: true Elapsed: 2.728111635s
With metrics: true Elapsed: 2.216537896s
With metrics: true Elapsed: 2.392596594s

Signed-off-by: Marco Pracucci <marco@pracucci.com>
@pracucci marked this pull request as ready for review on January 26, 2024 14:49
@MichaHoffmann (Contributor) left a comment:
lgtm

@yeya24 merged commit bdadaef into thanos-io:main on Jan 28, 2024
7 checks passed
@pracucci deleted the fix-s3-upload-regression branch on January 29, 2024 08:21