ocis blobstore: shard blobs by space and segment blob paths #3557

Closed · exalate-issue-sync bot opened this issue Apr 21, 2022 · 1 comment · Fixed by #3564
Labels: Type:Story User Story

Comments

@exalate-issue-sync

As an admin I may want to archive complete spaces and navigate to blobs by their path in the ocis blobstore. On a POSIX filesystem, listing a directory with ls takes longer the more blobs are stored in it.

The first step towards making the number of blobs more manageable is to shard them by space. By default, the blobs should be stored under the same path as the space, so an admin can use existing unix tools to move, back up and restore a space, including its metadata and blob data.
The path layout should be configurable, so that all blobs could be put into the same directory if the spaceid were not taken into account. That would be a path towards a future content-addressed storage, where blobs are stored by their hash. Then again, that would be a new blobstore, so it is not relevant for now.
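
A minimal sketch of the space-sharded layout described above. The directory names ("spaces", "blobs"), the storage root and the blobPath helper are illustrative assumptions, not the actual on-disk layout of the ocis blobstore:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// blobPath places a blob underneath its space so that a whole space,
// including metadata and blob data, can be moved, backed up and restored
// with plain unix tools. The "spaces" and "blobs" directory names and the
// helper itself are assumptions for illustration only.
func blobPath(root, spaceID, blobID string) string {
	return filepath.Join(root, "spaces", spaceID, "blobs", blobID)
}

func main() {
	fmt.Println(blobPath(
		"/var/lib/ocis/storage",                // example storage root
		"f7fbf8c8-139b-4376-b307-cf0a8c2d0d9c", // example space id
		"582d68ea-1843-4245-9289-73e33b93044f", // blob id from the issue
	))
	// /var/lib/ocis/storage/spaces/f7fbf8c8-139b-4376-b307-cf0a8c2d0d9c/blobs/582d68ea-1843-4245-9289-73e33b93044f
}
```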

The second step is to segment the names of blobs, similar to the nodes in the space metadata. A blob with the id 582d68ea-1843-4245-9289-73e33b93044f should be stored as 58/2d/68/ea/-1/843-4245-9289-73e33b93044f, or maybe with a configurable number of segments.
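
A sketch of the segmentation rule described above, not taken from the linked PRs. The number and width of the segments are parameters, covering the "configurable amount of segments" variant; the values used below are chosen so the output matches the example path from the issue:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// segmentID splits an id into up to `depth` segments of `width` characters,
// followed by the remainder, and joins them into a relative path.
// depth and width are assumptions standing in for whatever configuration
// the blobstore ends up exposing.
func segmentID(id string, depth, width int) string {
	parts := make([]string, 0, depth+1)
	for i := 0; i < depth && len(id) > width; i++ {
		parts = append(parts, id[:width])
		id = id[width:]
	}
	parts = append(parts, id)
	return filepath.Join(parts...)
}

func main() {
	// Prints 58/2d/68/ea/-1/843-4245-9289-73e33b93044f,
	// the layout from the issue description.
	fmt.Println(segmentID("582d68ea-1843-4245-9289-73e33b93044f", 5, 2))
}
```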

@exalate-issue-sync (Author)

Jörn Friedrich Dreyer commented: reva PR cs3org/reva#2763 has been merged.
ocis PR #3564 may need a rebase to fix the tests.
