Description
Feature request
Make the object upload size limit configurable at the bucket level (even better if you extend it to the folder level)
Is your feature request related to a problem? Please describe.
It would let us cap the file size users can upload to a given bucket/folder (for example, profile pictures rarely need more than 1MB).
Since a frontend file size check can be bypassed with common tools, it is better to have the check on the backend before the upload is accepted.
Even with the existing configurable global max file size, that does not help much: every user can still upload files up to that global limit, which, depending on the project/product structure, is often more than needed, and a user could fill the storage bucket with raw data.
Describe the solution you'd like
- The file size limit you already check for every upload, made configurable at the bucket level.
- Any extension of that check.
- Any policy check that can help (for example, something like `storage.filesize()`).
When a user uploads a file to a bucket, I want the file size checked on the backend before it is stored in the bucket.
Say your global limit is 50MB.
Then I should be able to configure each of my buckets somewhere in the 0-50MB range, as my project needs, for example:
avtar_bucket (max_file_limit: 1MB)
project_bucket (max_file_limit: 5MB)
large_bucket (max_file_limit: 50MB)
..........
That would help me use my storage volume precisely within its limits.
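As an illustration only (this is not an existing Supabase feature), the per-bucket check described above could look roughly like the sketch below; `bucketLimits` and `withinBucketLimit` are hypothetical names:

```typescript
// Hypothetical per-bucket max upload sizes, using the example buckets above.
const bucketLimits: Record<string, number> = {
  avtar_bucket: 1 * 1024 * 1024,    // 1MB
  project_bucket: 5 * 1024 * 1024,  // 5MB
  large_bucket: 50 * 1024 * 1024,   // 50MB
};

// Returns true when a file of `sizeBytes` fits the bucket's configured limit.
// A bucket with no configured limit is rejected here; a real backend might
// instead fall back to the global limit.
function withinBucketLimit(bucket: string, sizeBytes: number): boolean {
  const limit = bucketLimits[bucket];
  if (limit === undefined) return false;
  return sizeBytes <= limit;
}

console.log(withinBucketLimit("avtar_bucket", 512 * 1024));       // 512KB avatar fits
console.log(withinBucketLimit("avtar_bucket", 2 * 1024 * 1024));  // 2MB avatar is rejected
```

Running this kind of check on the storage backend (rather than in the frontend) is what makes the limit impossible to bypass with client-side tools.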
Describe alternatives you've considered
Otherwise we have to route every upload through a server function just for this check, doing the same work twice: once in my own server code for my per-bucket limit, and once in your (Supabase) code for the global limit.
That defeats the real benefit of the Supabase Storage API, which is uploading directly from the client side.
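For concreteness, the duplicated server-side check described here might be sketched as follows. `UploadFn` and `guardedUpload` are hypothetical names; the real upload call would be something like supabase-js's `storage.from(bucket).upload(path, file)`:

```typescript
// Stand-in type for the real storage upload call
// (e.g. supabase.storage.from(bucket).upload(path, file) in supabase-js).
type UploadFn = (bucket: string, path: string, body: Uint8Array) => void;

// Our own per-bucket limit, duplicated in server code because the
// storage backend only enforces one global limit.
const AVATAR_MAX_BYTES = 1 * 1024 * 1024;

// Every client upload has to be proxied through this server function,
// which re-checks the size before handing the file to storage.
function guardedUpload(
  upload: UploadFn,
  bucket: string,
  path: string,
  body: Uint8Array,
): "uploaded" | "rejected" {
  if (body.length > AVATAR_MAX_BYTES) return "rejected"; // our own check
  upload(bucket, path, body); // the global limit is checked again inside storage
  return "uploaded";
}
```

Funnelling every upload through such a proxy is exactly the drawback this section describes: the file no longer goes directly from the client to storage.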