S3 sink batch size stuck at 2.4 MB sized files #21696
Comments
Hi @ElementTech! I think this is the case, but do I take it to mean that you tried increasing the batch settings? Note that …
Hey @jszwedko, yes, I've played around with those values both up and down. Just for the sake of testing, as you can see, I've set all of the numbers extremely high, but there is still no difference in behavior. I should also note that each of those … I might be wrong, but I'm also using … Thanks!
Apologies for the delayed response!
Gotcha, that is interesting.
One shot in the dark: can you try setting …
In Vector's architecture, buffers sit in front of sinks, so from the sink's perspective it makes no difference whether the fronting buffer is …
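For illustration only (not part of the original thread): a minimal sketch of the two buffer variants on a hypothetical `aws_s3` sink named `s3_out`. In either case the buffer only holds events on their way into the sink, as described above, so it does not determine the size of the objects the sink writes.

```toml
# In-memory buffer (Vector's default), sized in events:
[sinks.s3_out.buffer]
type = "memory"
max_events = 10000

# Disk buffer, sized in bytes -- use instead of the block above:
# [sinks.s3_out.buffer]
# type = "disk"
# max_size = 1073741824   # 1 GiB
```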
I don't think that issue is related. It is more about incrementally uploading large configured batches so that Vector does not need to keep an entire batch in memory until it is sent (and thus bloat its memory use).
Problem
I have Vector installed in Kubernetes on AWS. I am using SQS as a source and S3 as a sink. No matter how high I set the batching and buffer parameters, at peak event-ingestion load my S3 bucket receives files of exactly 2.4 MB. When an event spike ends, the remaining events are flushed in smaller files until the backlog is drained.
Configuration
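The configuration block was not captured in this copy of the issue. As a stand-in, here is a hedged sketch of the topology described above (an `aws_sqs` source feeding an `aws_s3` sink with large batch limits and a buffer); every name, region, URL, bucket, and number below is a placeholder, not the reporter's actual value.

```toml
# Hypothetical approximation of the reported setup, not the actual configuration.
[sources.sqs_in]
type = "aws_sqs"
region = "us-east-1"                                                          # placeholder
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/example-queue"  # placeholder

[sinks.s3_out]
type = "aws_s3"
inputs = ["sqs_in"]
region = "us-east-1"       # placeholder
bucket = "example-bucket"  # placeholder
key_prefix = "logs/%Y/%m/%d/"
compression = "gzip"
encoding.codec = "json"

# "Extremely high" batch limits, per the discussion above; these are upper
# bounds, not target sizes, so smaller objects can still be flushed earlier.
[sinks.s3_out.batch]
max_bytes = 104857600      # 100 MiB (placeholder)
timeout_secs = 60          # 1800 in the production environment

# Buffer type is an assumption; per the maintainer's comment, memory vs. disk
# makes no difference from the sink's perspective.
[sinks.s3_out.buffer]
type = "disk"
max_size = 1073741824      # 1 GiB (placeholder)
```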
Version
0.42.0-distroless-libc
Debug Output
No response
Example Data
No response
Additional Context
I have two environments. The only difference between them is the batch.timeout_secs parameter: 60 in my dev environment and 1800 in production. The exact same issue (2.4 MB files) happens in both.
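For concreteness, the only setting that differs between the two environments would look like this in the sink's batch block (the sink name is a placeholder):

```toml
[sinks.s3_out.batch]
timeout_secs = 60   # dev; production uses 1800 -- both produce ~2.4 MB objects
```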
References
No response