A user could create a DOS by uploading files that are too large #145
Comments
In our meeting today we decided on the S3-as-tmp-storage solution, along with enabling the SDK to query for account storage limits, so that we can do some basic "no you can't upload a 100TB file" protections by tracking the amount of in-flight data and making sure it doesn't exceed the user's quota (or 5x the user's quota if we have 5 separate SFTP servers, but whatever).
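To make the in-flight tracking concrete, here is a minimal sketch. The SDK call `getAccountStorage` and its return shape are assumptions for illustration; the real SDK method will differ.

```typescript
// Sketch: track in-flight upload bytes per account and reject uploads that
// would exceed the account's remaining quota.
import { getAccountStorage } from '@permanentorg/sdk'; // hypothetical import

const inFlightBytes = new Map<string, number>();

export async function reserveUpload(
  accountId: string,
  fileSize: number,
): Promise<void> {
  // Hypothetical SDK call returning the account's remaining quota in bytes.
  const { remainingBytes } = await getAccountStorage(accountId);
  const pending = inFlightBytes.get(accountId) ?? 0;
  if (pending + fileSize > remainingBytes) {
    throw new Error('Upload would exceed the account storage quota');
  }
  inFlightBytes.set(accountId, pending + fileSize);
}

export function releaseUpload(accountId: string, fileSize: number): void {
  const pending = inFlightBytes.get(accountId) ?? 0;
  inFlightBytes.set(accountId, Math.max(0, pending - fileSize));
}
```

One wrinkle: SFTP clients don't always announce a file's size up front, so the reservation may have to be made incrementally as data arrives rather than once per file.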
Some thoughts about implementation here:
The idea of the microservice was to avoid the need for direct integration with S3 in our systems, so routing through it would be a useful approach unless the intention is to eliminate the microservice. I'll raise this with the team during our rclone meeting today.
On another note: S3 has a limit of 5GB per POST, which means we either accept a 5GB limit via SFTP or we do multipart uploads. The upload service as written does not currently support multipart uploads, though adding that would be a worthwhile improvement. This article talks about how to combine multipart uploads with presigned URLs. This is a bit moot, though, because the permanent back end itself also does not support multipart uploads, which means it cannot accept files larger than 5GB without a similar multipart enhancement.
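As a rough illustration of the multipart-plus-presigned-URL idea mentioned above (not the service's actual implementation), the server would create the multipart upload and sign one URL per part, and the client would PUT each part against its signed URL. Bucket and key names here are placeholders.

```typescript
// Sketch: presign one URL per part of an S3 multipart upload.
// Error handling and AbortMultipartUpload cleanup are omitted.
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({});

export async function presignParts(
  bucket: string,
  key: string,
  partCount: number,
): Promise<{ uploadId: string; urls: string[] }> {
  const { UploadId } = await s3.send(
    new CreateMultipartUploadCommand({ Bucket: bucket, Key: key }),
  );
  const urls = await Promise.all(
    Array.from({ length: partCount }, (_, i) =>
      getSignedUrl(
        s3,
        new UploadPartCommand({
          Bucket: bucket,
          Key: key,
          UploadId,
          PartNumber: i + 1, // part numbers are 1-based
        }),
        { expiresIn: 3600 },
      ),
    ),
  );
  return { uploadId: UploadId!, urls };
}
```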
We did decide to move to the S3 microservice!
My naive implementation plan needs improvement; I'm documenting it here just to record the evolution.
Where I started: write incoming data to S3 as tmp storage, then read it back and upload it again to "unprocessed" storage. This approach is not ideal because data is being sent around fully 2x more than necessary (we write to S3 tmp storage, then read from that and write AGAIN to "unprocessed" storage!).
Where I'm going: I think it would be good to update the SDK to optionally accept an S3 URL instead of file data; when given an S3 URL it should skip the "upload" step and jump straight to registering the record. This should also pair with the SDK providing direct access to the presigned post.
This would also mean we don't need the SFTP service to interact with the upload service directly, since it's just getting the presigned post via the API. Another benefit of this is that the API can handle size allocation / validation and prevent that form of potential abuse.
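For illustration, a minimal sketch of how an API could enforce a size cap when handing out a presigned POST, using the AWS SDK v3 presigned-post helper; the bucket, key, and size limit are placeholder values, not the project's actual configuration.

```typescript
// Sketch: issue a presigned POST whose policy enforces a maximum object size,
// so size validation happens at the API rather than in the SFTP service.
import { S3Client } from '@aws-sdk/client-s3';
import { createPresignedPost } from '@aws-sdk/s3-presigned-post';

const s3 = new S3Client({});

export async function issuePresignedPost(
  bucket: string,
  key: string,
  maxBytes: number, // e.g. the caller's remaining account quota
) {
  return createPresignedPost(s3, {
    Bucket: bucket,
    Key: key,
    // S3 rejects any POST whose body falls outside this byte range.
    Conditions: [['content-length-range', 0, maxBytes]],
    Expires: 3600, // seconds until the signed POST expires
  });
}
```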
I began to explore this path in a bit more depth -- I'm fairly sure that it's not actually possible to re-use a presigned post and specify a byte range to post; this really is the purpose of multipart upload. I spoke with @cecilia-donnelly and we agreed that although multipart upload is on the roadmap for backend / API support, it makes sense to implement it on this service for now in the interest of resolving this issue that is affecting users today. Eventually that should be replaced with an SDK-driven multipart upload.
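As a sketch of what a service-side multipart upload could look like (not necessarily how this service implemented it), the AWS SDK's `@aws-sdk/lib-storage` `Upload` helper streams a body of unknown length into S3 as a multipart upload, which sidesteps the 5GB single-PUT limit. Bucket and key are placeholders.

```typescript
// Sketch: stream an incoming SFTP file into S3 via a multipart upload.
import { Readable } from 'stream';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const s3 = new S3Client({});

export async function streamToS3(
  bucket: string,
  key: string,
  body: Readable, // e.g. the read stream handed to us by the SFTP server
): Promise<void> {
  const upload = new Upload({
    client: s3,
    params: { Bucket: bucket, Key: key, Body: body },
    partSize: 64 * 1024 * 1024, // 64MB parts; S3 requires at least 5MB
    queueSize: 4, // number of parts uploaded concurrently
  });
  await upload.done();
}
```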
We were using the local file system for storing files temporarily as they were being uploaded to the SFTP service. We knew this was not a long term solution for several reasons, including the security risk of a denial of service by filling up the disk space. We're moving to the long term solution of using S3 directly for that upload. We can't use the existing upload-service and permanent API for this because, at least for now, it requires us to know the size of the file AND does not allow us to upload over multiple parts. For this reason we're integrating with S3 directly. Issue #145 A user could create a DOS by uploading files that are too large
AWS has a 5GB limit on single-part uploads, which is why 5GB was selected here. This does not necessarily prevent the risk of a DOS, but it does require a user to start multiple simultaneous uploads. Issue #145 A user could create a DOS by uploading files that are too large
We didn't end up implementing multipart upload, but we gave the SFTP service more temporary storage. I think that was the end of this, right?
This is still a vulnerability (though less extreme than it was before) -- I would leave the issue open until permanent is ready to prioritize resolving it.
(This is related to #144)
We currently store in-transit data locally AND have no limits on how large a given uploaded file can be. This means anybody could deny service by simply uploading a file that is larger than the locally configured storage space.
I believe the only solution here is to stop using local storage for temporary space; this would mean using S3 as the temporary storage location. We might, separately, also want to create limits on how much data can be uploaded by a given user (this could be rooted in knowing how much storage space they have available on their account).