
Implement quota tracking options per ObjectStore. #10221

Closed
wants to merge 2 commits

Conversation

@jmchilton (Member) commented Sep 15, 2020

Builds on #10212.

Overview

#6552 implemented the ability for admins to assign job outputs to different object stores at runtime (this could take into account tool/workflow-injected parameters or just be based on user, tool, destination, cluster state, etc.). But all the stored data would consume the same quota, regardless of the object store selected.

This pull request allows different object stores, or different groups of object stores, to have different quotas or no quota at all. This enables use cases such as sending jobs to cheaper storage when a user's quota is nearly full, or allowing admins to set up tool and/or workflow parameters that send job outputs to higher-quality, more redundant storage based on user-selected options or user preferences.

This is a substantial step toward allowing scratch-space histories. While I suspect we will want to implement some higher-level convenience functions and interfaces around that (per-history preferences, object store preference types), I think that would all build on these abstractions - abstractions that allow even more flexibility for admins who require it.

Implementation

This adds a quota tag to XML/YAML object store declarations, which allows specifying a "quota source label" for each object store in a nested object store configuration, or disabling quota tracking altogether for an object store.

The following quota block would assign all of this backend's storage to a quota source labeled s3.

        <backend id="dynamic_s3" type="disk" weight="0">
            <quota source="s3" />
            <files_dir path="${temp_directory}/files_dynamic_s3"/>
        </backend>

Whereas this would disable quota usage for this object store altogether.

        <backend id="temp_disk" type="disk" weight="0">
            <quota enabled="false" />
            <files_dir path="${temp_directory}/files_cloud_scratch"/>
        </backend>
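Since the description also mentions YAML declarations, here is a hypothetical YAML rendering of the same two backends. The exact key names are an assumption extrapolated from the XML form above, not taken from this PR:

```yaml
# Hypothetical YAML form of the XML snippets above; key names are assumed.
backends:
  - id: dynamic_s3
    type: disk
    weight: 0
    quota:
      source: s3          # charge usage to the "s3" quota source label
    files_dir: ${temp_directory}/files_dynamic_s3
  - id: temp_disk
    type: disk
    weight: 0
    quota:
      enabled: false      # datasets here never count against any quota
    files_dir: ${temp_directory}/files_cloud_scratch
```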

To implement this, a new table/model has been added to track a user's usage per quota source label - namely UserQuotaSourceUsage. Object stores that do not have a source label are still tracked using the User model's disk_usage attribute. I've updated all the scripts that recalculate user usage.
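To make the bookkeeping concrete, here is a minimal sketch of per-quota-source usage recalculation. This is not Galaxy's actual implementation - QUOTA_SOURCE_MAP, recalculate_usage, and the tuple layout are names I made up for illustration; the real code works against the database models:

```python
from collections import defaultdict

# Hypothetical mapping: object_store_id -> (quota source label or None for
# the default quota tracked on User.disk_usage, whether quota is enabled).
QUOTA_SOURCE_MAP = {
    "default": (None, True),
    "dynamic_s3": ("s3", True),   # <quota source="s3" />
    "temp_disk": (None, False),   # <quota enabled="false" />
}

def recalculate_usage(datasets):
    """Aggregate (object_store_id, size) pairs into per-quota-source buckets.

    Returns a dict mapping quota source label (None = default quota) to
    total bytes; stores with quota disabled contribute nothing.
    """
    usage = defaultdict(int)
    for object_store_id, size in datasets:
        _label, enabled = QUOTA_SOURCE_MAP[object_store_id]
        if enabled:
            usage[_label] += size
    return dict(usage)

example = [
    ("default", 100),
    ("dynamic_s3", 250),
    ("dynamic_s3", 50),
    ("temp_disk", 10_000),  # scratch space - never counted
]
print(recalculate_usage(example))  # {None: 100, 's3': 300}
```

Collapsing all sizes into labeled buckets like this is what lets the unlabeled bucket keep feeding User.disk_usage while labeled buckets land in UserQuotaSourceUsage rows.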

UI + API

The quota dialog adds the option to pick a quota source label from those defined on the object stores, though this option only appears if quota source labels are configured.

[Screenshot: quota dialog with a quota source label selector]

Likewise, the quota meter is unaffected by default, but when multiple quota source labels are configured the meter becomes a link that shows the usage of each quota source.

[Screenshot: per-quota-source usage breakdown linked from the quota meter]

A new API endpoint, /api/users/<user_id|current>/usage, enables this.
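As a rough illustration, a response from that endpoint could look something like the following. The field names and values here are assumptions for illustration only - they are not taken from this PR:

```json
[
  {"quota_source_label": null, "total_disk_usage": 1073741824},
  {"quota_source_label": "s3", "total_disk_usage": 536870912}
]
```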

Abstractions for #4840

While this PR adds significant complexity related to recalculating a user's quota, it does reduce duplication, adds tests (made more useful by having fewer paths through the quota recalculation code), and brings object store information into the calculation. I think this is all stuff that would be needed for #4840 and is currently missing.

Part of this establishes a pattern for how to exclude certain datasets from usage calculation both when a dataset is being added (included in #4840) and when usage is recalculated (not included in #4840).
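The key idea behind that pattern is a single predicate shared by both paths, so the add-time and recalculation-time decisions cannot drift apart. A sketch, with names (Dataset, counts_toward_usage, the map layout) that are my assumptions rather than Galaxy's real code:

```python
from collections import namedtuple

# Toy stand-in for a dataset record; the real model has many more fields.
Dataset = namedtuple("Dataset", ["object_store_id", "purged"])

# Hypothetical object_store_id -> (quota source label, quota enabled) map.
QUOTA_SOURCE_MAP = {
    "default": (None, True),
    "temp_disk": (None, False),  # <quota enabled="false" />
}

def counts_toward_usage(dataset, quota_source_map=QUOTA_SOURCE_MAP):
    """Shared exclusion predicate: used both when a dataset is first
    written and during bulk recalculation, so the two paths agree."""
    if dataset.purged:
        return False
    _label, enabled = quota_source_map[dataset.object_store_id]
    return enabled

print(counts_toward_usage(Dataset("default", purged=False)))    # True
print(counts_toward_usage(Dataset("temp_disk", purged=False)))  # False
```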

The API endpoints for disk usage across object stores and the UI entry point for displaying that information will hopefully both enable a more robust implementation of #4840.

@jmchilton jmchilton force-pushed the quota_per_objectstore branch 6 times, most recently from 06842e5 to 090e02d on September 16, 2020 13:54
@jmchilton jmchilton force-pushed the quota_per_objectstore branch 2 times, most recently from fc219a0 to b42fda2 on September 25, 2020 00:26
@jmchilton jmchilton force-pushed the quota_per_objectstore branch 6 times, most recently from aa4c391 to 3f20af9 on September 29, 2020 01:05
@jmchilton jmchilton changed the title [WIP] Implement quota tracking options per ObjectStore. Implement quota tracking options per ObjectStore. Sep 29, 2020
@jmchilton jmchilton changed the title Implement quota tracking options per ObjectStore. [WIP] Implement quota tracking options per ObjectStore. Sep 29, 2020
@jmchilton jmchilton changed the title [WIP] Implement quota tracking options per ObjectStore. Implement quota tracking options per ObjectStore. Sep 30, 2020
@galaxybot galaxybot added this to the 21.01 milestone Sep 30, 2020
@jmchilton jmchilton force-pushed the quota_per_objectstore branch 3 times, most recently from 4c59300 to 1a0c743 on October 9, 2020 13:57
@jmchilton (Member, Author)

In a long conversation with @natefoo and @mvdbeek we decided this needs to go a bit further, at least before being rolled out on main.

  • Longer term we need the ability to copy from one object store to another asynchronously, but until that is ready there are certain copies that are effectively just changing the object_store_id on a dataset, and those should be implemented - with quota recalculation, a UI, etc.
  • Histories need to be filterable by datasets in a given object store, so users can see data scheduled for deletion. This can just piggyback on existing UI filtering plumbing.

I'd also love a little summary of object store, usage, etc. within a history - perhaps using the disk-usage-per-dataset widget Dannon demo'd years ago (@dannon, do you have a link to that sitting in a branch somewhere?) - but that might be an iteration-2 type of thing.

@dannon (Member) commented Nov 20, 2020

@jmchilton I'll see if I can dig it up -- I know I have it somewhere and it'd be great for that to see use somewhere.
