"S3Proxy limits the size of non-chunked requests to 32 MB by default but you can override this via the s3proxy.v4-max-non-chunked-request-size property. Please open a new issue if AWS has a different limit because I set this several years ago. Another workaround is to use multi-part uploads." - gaul
Opening this issue a bit late :) because I am using s3proxy more again.
Looks like Amazon supports huge part sizes now (5 GB max). Minio defaults to 128 MB. I think the s3proxy limit should be raised to at least the Minio limit. I'm not sure at what point it becomes resource-intensive, but larger sizes are definitely needed these days. Another issue with small part sizes is that TCP can take a few seconds to ramp up to full speed on higher-latency connections (slow start, MTU negotiation, etc.); on faster connections, by the time it starts doing that, the part is already finished and it has to start all over again. So some people would get more throughput with a higher value.
Anyway, thanks for all your hard work on this project!
Update: there may be some confusion here too - I AM using multipart uploads, not non-chunked uploads - but it sounds like both limits should be raised. https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html - I am also using the filesystem backend, in case that matters here.
Update 2:
Ok so, setting s3proxy.v4-max-non-chunked-request-size="330554432" resolved the issue. In the docker image this is set in the entrypoint script via ${S3PROXY_V4_MAX_NON_CHUNKED_REQ_SIZE:-33554432}. Slightly confusing, because I figured that if I'm doing a multipart upload, that would be considered 'chunking'. ...but yes, I'd totally recommend higher defaults in all places, so this isn't an entirely worthless post.
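For anyone else hitting this in Docker: since the entrypoint applies the default via that `${S3PROXY_V4_MAX_NON_CHUNKED_REQ_SIZE:-33554432}` substitution, overriding should just be a matter of passing the env var. A minimal sketch of the byte arithmetic (the `andrewgaul/s3proxy` image name in the comment is an assumption, as is 128 MB being the right new value):

```shell
# Byte values for the two limits discussed above
DEFAULT=$((32 * 1024 * 1024))    # current s3proxy default
PROPOSED=$((128 * 1024 * 1024))  # Minio's default part size
echo "default=$DEFAULT proposed=$PROPOSED"
# In the Docker image the override would then look like (image name assumed):
#   docker run -e S3PROXY_V4_MAX_NON_CHUNKED_REQ_SIZE=$PROPOSED andrewgaul/s3proxy
```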
Changing the default to match Minio seems like a safe choice. Having a smaller upper limit can help lower-memory instances. Can you submit a pull request to change the default?
"S3Proxy limits the size of non-chunked requests to 32 MB by default but you can override this via the s3proxy.v4-max-non-chunked-request-size property. Please open a new issue if AWS has a different limit because I set this several years ago. Another workaround is to use multi-part uploads." - gaul
Opening this issue a bit late :) because I am using s3proxy more again.
Looks like Amazon supports huge chunk size now (5GB max). Minio defaults to 128MB. I think s3proxy limit should be raised to at least the minio limit. I'm not sure at what point it becomes resource intensive but larger sizes are definitely needed these days. Another issue with small chunk sizes is TCP can take a few seconds to work up to its full speed on higher latency connections (and mtu negotiation etcetc), on faster connections by the time it's starting to do that, the part is already finished and it needs to start all over again. So some people would get more throughput with a higher value.
Anyway, thanks for all your hard work on this project!
Update: there may be some confusion here too - I AM using multipart uploads, not non-chunked - but both should be raised it sounds like. https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html - I am also using the filesystem backend in case that matters here.
Update 2:
Ok so, setting s3proxy.v4-max-non-chunked-request-size="330554432" resolved the issue, in the docker image this is being set in the entrypoint script with ${S3PROXY_V4_MAX_NON_CHUNKED_REQ_SIZE:-33554432}". Slightly confusing because i figured if i'm doing a multipart upload that would be considered 'chunking'. ...but yes, totally recommend some higher defaults in all places, so this isn't an entirely worthless post.
The text was updated successfully, but these errors were encountered: