ownCloud remaining Quota bug #9245
Comments
@Y4PHI Which ocis version are you using, and which architecture are you running on? Where are you setting that quota? I just tried a few things myself, but I was not able to reproduce the problem.
Please note that the available quota is adjusted when the disk space is not enough. The ocis process in the storage-users service does a syscall to the disk and asks for the remaining space.
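That "syscall to the disk" behavior can be sketched as follows. This is a minimal, hedged illustration of how a process queries the remaining space of the local filesystem on Linux via `statfs`; the function name `availableBytes` is illustrative, not the actual ocis code:

```go
package main

import (
	"fmt"
	"syscall"
)

// availableBytes reports the free space on the filesystem containing path,
// as an unprivileged user would see it. Linux-only sketch using statfs(2).
func availableBytes(path string) (uint64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	// Blocks available to unprivileged users times the filesystem block size.
	return st.Bavail * uint64(st.Bsize), nil
}

func main() {
	free, err := availableBytes("/")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remaining: %d bytes\n", free)
}
```

The key point for this issue: a call like this reports the space of the *local* filesystem the process runs on, which says nothing about the capacity of an S3 bucket behind it.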
@Y4PHI Is this still an issue? We have had no reaction so far.
That whole thing is on a Minio (S3) storage backend and the bucket has no limit.
If I create the space and then change the quota to 2.5 TB, clicking on Change Quota again shows "Unlimited" instead of 2.5 TB. The name of the space does not matter; the error is the same.
Can you post the response of the /me/drives call?
The system is provisioned with 32 TB of dedicated storage. It uses Minio, an S3-compatible object storage service, configured with no limit on bucket size, so there is no predefined cap on how much data a single bucket can hold. There is a discrepancy in the quota shown in the web interface, although the client application works correctly and displays the expected quota values. When new files are added, the displayed total quota changes, which is incorrect: the total should stay constant regardless of uploads. The underlying issue may be in how quota is calculated when the bucket size is unlimited.
Here is the response of the /me/drives call. The WebDAV URL has been removed and the IDs have been modified.
Just to give some more information here: one of our team members is reporting issues while uploading to a space. We want to upload a 300 GB file to one of our spaces, and this upload is now failing because of the incorrectly calculated unused quota. As a reminder, this bucket (S3 storage backend) is a Minio instance with 80 TB of storage; it is far from full as of now.
Weird. Needs debugging.
Thanks! If you need help debugging this or need input from our side, feel free to contact me.
Hi. I have some additional information that could be important. As I said, we have a 1 TB SSD that OCIS (Dockerized) runs on and a storage server (S3) with around 80 TB. We have now observed a possible correlation between the free disk space of the 1 TB SSD and the available quota. We started an upload at around 615 GB of available disk space (on the 1 TB SSD); the quota was around 610-620 GB too, and we saw the available quota going down at the same rate. We can only compare the quota in OCIS with the available disk space (of the 1 TB SSD) in our monitoring, which refreshes every 60 s, and the two numbers match within the deviation of the current upload. So our guess is that the "available quota" is in fact the available space on the disk (or the directory OCIS stores its files in), not the actual S3 storage behind it. That is legit, but a bit confusing.
Move the calculation of the available size to the blobstore interface. We were just returning the available size of the local disk, which is wrong when using the S3 blobstore. Also, as the S3 blobstore doesn't have a concept of available size, we don't return a 'remaining' for S3 backed spaces with unlimited quota. Fixes: owncloud/ocis#9245
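The fix described above can be sketched as an interface change. This is a hedged illustration, not the actual ocis code: the names `Blobstore`, `RemainingSpace`, and `quotaResponse` are invented for this sketch. The idea is that each blobstore reports its own remaining space, and a store that cannot know it (like S3) says so explicitly, letting the API layer omit `remaining`:

```go
package main

import "fmt"

// Blobstore is a hypothetical interface: each backend reports its own
// remaining space instead of the caller assuming the local disk.
type Blobstore interface {
	// RemainingSpace returns the available bytes and whether the
	// value is known at all.
	RemainingSpace() (uint64, bool)
}

// localBlobstore knows its free space (in practice from a statfs syscall).
type localBlobstore struct{ free uint64 }

func (l localBlobstore) RemainingSpace() (uint64, bool) { return l.free, true }

// s3Blobstore has no concept of available size, so it reports "unknown"
// and the API layer can leave out the `remaining` quota property.
type s3Blobstore struct{}

func (s3Blobstore) RemainingSpace() (uint64, bool) { return 0, false }

// quotaResponse builds a quota object, including `remaining` only when
// the backend can actually report it.
func quotaResponse(b Blobstore) map[string]any {
	q := map[string]any{"total": uint64(0)}
	if rem, ok := b.RemainingSpace(); ok {
		q["remaining"] = rem
	}
	return q
}

func main() {
	fmt.Println(quotaResponse(localBlobstore{free: 1 << 30})) // includes "remaining"
	fmt.Println(quotaResponse(s3Blobstore{}))                 // no "remaining" key
}
```

The comma-ok second return value is what makes "unknown" distinct from "zero bytes remaining", which is the crux of the bug.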
In certain setups the storage is not able to report the remaining size of a space, e.g. when no quota is set and the space is using the S3 blob storage driver. In this case the graph API response will no longer include the `remaining` property in the quota. Fixes: owncloud#9245
@tbsbdr @JammingBen Reopening, since there is still the issue with the consistency between the two views in web. See: #9245 (comment) Should I open a web issue for that?
I opened owncloud/web#11679 for web
Describe the bug
If we set the quota above 884.2 GB, only 884.2 GB is displayed as the remaining quota.
Expected behavior
The configured quota should be displayed. Instead, if we set the total quota to 2.5 TB as desired, only 884.2 GB is shown as the available quota. If the total quota is set below 884.2 GB, it is displayed correctly. We have over 40 TB of storage available.