
Make object upload size limit at bucket level configurable #134

Closed
rahul3v opened this issue Apr 9, 2022 · 15 comments
Labels: accepted, enhancement, Hacktoberfest

Comments

rahul3v (Contributor) commented Apr 9, 2022

Feature request

Make object upload size limit at bucket level configurable (even better if you extend it to folder level).

Is your feature request related to a problem? Please describe.

It will help limit users to a given file size when uploading to a bucket/folder (for example profile pics, where at most 1MB is needed).
Since a frontend file size check can be bypassed with tools, it would be better to have checks on the backend before the upload.

Even if we have a single configurable setting to limit the max file size, that will not help much: it lets every user upload at that max limit, which, depending on the project/product structure, may not be wanted everywhere, and a user might fill the storage bucket with raw data.

Describe the solution you'd like

  • The way you check the file size limit globally for any upload, make that configurable at the bucket level.
  • Any extension of this to the folder level.
  • Any policy check that can help (like with storage.filesize())


When a user uploads a file to a bucket, I want the file size to be checked on the backend before it is stored in the bucket.

Let's say your limit is 50MB. I should then be able to configure my buckets anywhere in the 0-50MB range as my project needs, like:

 avtar_bucket   (max_file_limit: 1MB)
 project_bucket (max_file_limit: 5MB)
 large_bucket   (max_file_limit: 50MB)
 …

That will help me use my storage precisely, within my storage volume limits.

Describe alternatives you've considered

Otherwise we have to run every upload through a server function just for this check before the file goes to the storage bucket, doing the same thing twice (my per-bucket limit check running in my server code, and your (Supabase) global limit check running in your code).

That defeats the real use of the Supabase storage API: uploading directly from the client side.

rahul3v added the enhancement label Apr 9, 2022
rahul3v (Contributor, Author) commented May 3, 2022

@kiwicopple, @thebengeu, @alaister, any comments?

alaister (Member) commented

Hey @rahul3v,
Really sorry for the delayed response - we've been busy!

We discussed this in our most recent storage meeting, and we think it's a great idea!

I've added it to our internal to-do list but can't give you any timeline on when this might be implemented.

Thanks again, and as always, PRs are welcome :)

rahul3v (Contributor, Author) commented May 24, 2022

@alaister, if possible, extend it to the folder level too. It doesn't need to go deep: one folder level will be OK, or at most two. :)

rlee1990 commented

This would be awesome. Having the max storage size for all buckets set to one size is rough.

fenos added the accepted label Sep 7, 2022
fenos (Contributor) commented Mar 6, 2023

This is now being shipped with #277

fenos closed this as completed Mar 6, 2023
jopfre commented Mar 9, 2023

Is this functionality ready to use now? If so, how do we use it?

KhaledGabr commented

@fenos Related to this: could we define a max for the bucket itself? I want to set a maximum for the entire bucket (the sum of all files inside it).

inian (Member) commented Apr 18, 2023

These options should be available when you create or edit a bucket in the dashboard, and in the API and client library too.

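For example, with the supabase-js client the per-file limit can be set per bucket through the fileSizeLimit option. A minimal sketch (the project URL, anon key, and bucket name are placeholders):

import { createClient } from '@supabase/supabase-js'

// Placeholder project URL and anon key.
const supabase = createClient('https://xyzcompany.supabase.co', 'public-anon-key')

// Create a bucket whose objects may each be at most 1MB.
// fileSizeLimit accepts a byte count or a human-readable string like '1MB'.
const { data, error } = await supabase.storage.createBucket('avatars', {
  public: false,
  allowedMimeTypes: ['image/*'],
  fileSizeLimit: '1MB',
})

// The limit can also be changed later on an existing bucket.
await supabase.storage.updateBucket('avatars', {
  public: false,
  fileSizeLimit: '2MB',
})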

haexhub commented Nov 15, 2023

Hi @inian.
I think what @KhaledGabr means is setting a limit on the bucket itself, not on the individual file size.
And I support that idea. It would be really nice if we were able to set a maximum bucket size.
So for example, I want to give a user a bucket that can hold at most 1 GB. I don't care how many files the user uploads, nor how large each individual file is, as long as the user can't upload more than 1 GB to their bucket.

inian (Member) commented Nov 15, 2023

KhaledGabr commented Nov 15, 2023

Hi @inian, I am looking for a maxBucketSize flag as opposed to setting a limit for each file.

const { data, error } = await supabase.storage.createBucket('avatars', {
  public: true,
  allowedMimeTypes: ['image/*'],
  maxFileSize: '1MB',    // max size for each file in the bucket
  maxBucketSize: '1GB',  // max size of the entire bucket
})

So, in the example above, if the user uploaded 1000 images of 1MB each, it would consume the limit.

inian (Member) commented Nov 15, 2023

Ah, you want a limit on the total size of the files in the bucket. What is your use case for this feature?

KhaledGabr commented

@inian I want to set an upper limit primarily to control cost. Each user in my system gets a bucket, and I should have some restrictions on how much they upload, and be able to change the bucket size based on whether they are paid users or not. Having no restrictions carries significant cost risk.

This could potentially be solved with RLS; the only issue is that I would need to tell the user in an error message that they exceeded the limit, which is not possible with RLS.
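For reference, such a policy might look roughly like the sketch below (assuming a single 'user-files' bucket with a 1 GB cap, and that the size key of storage.objects.metadata holds the object size in bytes; the policy name and numbers are illustrative, the new object's own size is not yet counted, and the client only sees a generic RLS violation rather than a quota message):

-- Sketch: reject inserts into the bucket once existing objects total 1 GB.
create policy "enforce_bucket_quota"
on storage.objects
for insert
to authenticated
with check (
  bucket_id <> 'user-files'
  or (
    select coalesce(sum((metadata ->> 'size')::bigint), 0)
    from storage.objects
    where bucket_id = 'user-files'
  ) < 1073741824  -- 1 GB
);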

haexhub commented Nov 15, 2023

Exactly. It's not just the cost itself; it's also the limit of available space. We always have a limited amount of disk space, and I want to be able to share it evenly. I want to give each user a bucket of a specific size, so I can divide the available space evenly and not have one user consume all of it.

li4man0v commented

Since we can use SQL to operate on storage buckets, one can create insert/delete object triggers that will aggregate the total size of objects in a bucket. Then, if necessary, the maxFileSize can be set to min(maxFileSize, maxBucketSize - totalFileSize). This will prevent users from uploading files exceeding the max bucket size. I didn't implement this logic myself, but it sounds reasonable :-)
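A rough, untested sketch of that idea (the bucket_usage table and all names here are hypothetical; storage.objects.metadata is assumed to carry a size key in bytes, and in practice that metadata may only be populated after the initial insert):

-- Hypothetical bookkeeping table: one row per bucket, with quota and usage.
create table if not exists public.bucket_usage (
  bucket_id text primary key,
  max_bucket_size bigint not null,       -- quota in bytes
  total_size bigint not null default 0   -- running sum of object sizes
);

-- Keep total_size in sync when storage objects are inserted or deleted.
create or replace function public.track_bucket_usage()
returns trigger
language plpgsql
security definer
as $$
begin
  if tg_op = 'INSERT' then
    update public.bucket_usage
    set total_size = total_size + coalesce((new.metadata ->> 'size')::bigint, 0)
    where bucket_id = new.bucket_id;
    return new;
  else  -- DELETE
    update public.bucket_usage
    set total_size = total_size - coalesce((old.metadata ->> 'size')::bigint, 0)
    where bucket_id = old.bucket_id;
    return old;
  end if;
end;
$$;

create trigger on_storage_object_change
after insert or delete on storage.objects
for each row execute function public.track_bucket_usage();

-- The min(maxFileSize, maxBucketSize - totalFileSize) step: clamp each
-- bucket's per-file limit to whatever quota remains, never below zero.
update storage.buckets b
set file_size_limit = least(b.file_size_limit,
                            greatest(u.max_bucket_size - u.total_size, 0))
from public.bucket_usage u
where b.id = u.bucket_id;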
