Implement configure_runner, configure_metric #3104
Open
mpolson64 wants to merge 3 commits into facebook:main from mpolson64:export-D66305614
Conversation
Summary: Implements new Client methods `configure_experiment`, `configure_optimization`, and `configure_generation_strategy`. Creates a new module, api.utils.instantiation, that holds functions for converting Configs into core Ax objects. These functions do not perform validation; validation will live on the Configs themselves and be implemented in a separate diff. Note that this diff also does not implement saving to the DB, although this will happen after each of these three methods is called if a config is provided.

**I'd especially like comment on our use of SymPy** here to parse objective and constraint strings. What we've wound up with is much less verbose, and I suspect much less error-prone, than what exists now in InstantiationBase, while also providing a more natural user experience (e.g. not getting tripped up by spacing, automatically handling inequality simplification like `(x1 + x2) / 2 + 0.5 >= 0` --> `-0.5 * x1 - 0.5 * x2 <= 1`, etc.) without any manual string parsing on our end at all. I'm curious what people think of this strategy overall. SymPy usage occurs in `_parse_objective`, `_parse_parameter_constraint`, and `_parse_outcome_constraint`.

Specific RFCs:
* We made the decision earlier to combine the concepts of "outcome constraint" and "objective threshold" into a single concept to make things clearer for our users -- do we still stand by this decision? Seeing it in practice, I think it will help our users a ton, but I want to confirm this before we get too far into implementation.
* We discussed previously that if we were using strings to represent objectives, we wanted users to be able to specify optimization direction via coefficients (e.g. objective="loss" vs. objective="-loss"), **but we did not decide which direction a positive coefficient would indicate**. In this diff I've implemented things such that a positive coefficient indicates minimization, but I'm happy to change it -- I don't think one direction is better than the other; we just need to be consistent.
* To express relative outcome constraints, rather than use "%" as we do in AxClient, we ask the user to multiply their bound by the term "baseline" (e.g. "qps >= 0.95 * baseline" constrains QPS to be at least 95% of the baseline arm's QPS). To be honest, we do this to make things play nice with SymPy, but I also find it clearer, though I'm curious what you all think.

Differential Revision: D65826204
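The SymPy-based constraint parsing described above can be sketched roughly as follows. This is a minimal illustration, not the actual `_parse_parameter_constraint` implementation: the function name, return shape, and exact normalization are assumptions, and the real diff may canonicalize bounds differently.

```python
# Hypothetical sketch of parsing a linear parameter-constraint string with
# SymPy into the canonical form sum(weight * parameter) <= bound.
import sympy


def parse_parameter_constraint(s: str) -> tuple[dict[str, float], float]:
    """Parse an inequality string into ({parameter_name: weight}, bound)."""
    inequality = sympy.sympify(s)  # builds a relational expression
    # Move everything to the left-hand side so the relation reads "expr <= 0";
    # a ">=" constraint is flipped by negating both sides.
    if isinstance(inequality, sympy.GreaterThan):
        expr = inequality.rhs - inequality.lhs
    elif isinstance(inequality, sympy.LessThan):
        expr = inequality.lhs - inequality.rhs
    else:
        raise ValueError(f"Expected a non-strict inequality, got {s!r}")
    expr = sympy.expand(expr)
    symbols = sorted(expr.free_symbols, key=str)
    # Linear coefficient of each parameter symbol.
    weights = {str(sym): float(expr.coeff(sym)) for sym in symbols}
    # The constant term moves to the right-hand side as the bound.
    bound = -float(expr.subs({sym: 0 for sym in symbols}))
    return weights, bound
```

Note how SymPy does the simplification for free: `"(x1 + x2) / 2 + 0.5 >= 0"` comes out as weights `{"x1": -0.5, "x2": -0.5}` with bound `0.5`, with no manual string parsing.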
Summary: These methods are not, strictly speaking, "part of the API," but may be useful for developers and trusted partners. Each is fairly self-explanatory. Differential Revision: D66304352
Summary: configure_runner and configure_metric allow users to attach custom Runners and Metrics to their experiment.

configure_runner is fairly straightforward and just sets experiment.runner.

configure_metric is more complicated: given a list of IMetrics, it iterates through them and tries to find a metric with the same name somewhere on the experiment. In order, it checks the Objective (single, MOO, or scalarized), then outcome constraints, then tracking metrics. If no metric with a matching name is found, the provided metric is added as a tracking metric.

Differential Revision: D66305614
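The matching order configure_metric uses can be sketched with simplified stand-in types. This is a hypothetical, self-contained illustration of the search-then-fallback logic only; the real Ax `Experiment`, `Objective`, and `IMetric` classes have richer structure than these toy dataclasses.

```python
# Toy sketch of configure_metric's matching logic: check objective metrics,
# then outcome-constraint metrics, then tracking metrics; replace the first
# name match, or fall back to adding the metric as a tracking metric.
from dataclasses import dataclass, field


@dataclass
class Metric:
    name: str


@dataclass
class Experiment:
    objective_metrics: list[Metric]   # single, MOO, or scalarized objective
    constraint_metrics: list[Metric]  # outcome constraints
    tracking_metrics: list[Metric] = field(default_factory=list)


def configure_metrics(experiment: Experiment, metrics: list[Metric]) -> None:
    for metric in metrics:
        for bucket in (
            experiment.objective_metrics,
            experiment.constraint_metrics,
            experiment.tracking_metrics,
        ):
            for i, existing in enumerate(bucket):
                if existing.name == metric.name:
                    bucket[i] = metric  # replace the matching metric in place
                    break
            else:
                continue  # no match in this bucket; try the next one
            break  # matched and replaced; stop searching buckets
        else:
            # No metric with this name anywhere on the experiment:
            # attach it as a tracking metric.
            experiment.tracking_metrics.append(metric)
```

The nested for/else expresses "first match wins, in bucket order"; a custom `loss` metric would replace the objective's `loss`, while an unknown `latency` metric would simply be tracked.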
facebook-github-bot added the "CLA Signed" label (Do not delete this pull request or issue due to inactivity.) on Nov 21, 2024
This pull request was exported from Phabricator. Differential Revision: D66305614
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #3104      +/-   ##
==========================================
+ Coverage   95.43%   95.77%    +0.33%
==========================================
  Files         495      500        +5
  Lines       49956    50316      +360
==========================================
+ Hits        47674    48188      +514
+ Misses       2282     2128      -154

View full report in Codecov by Sentry.
mpolson64 added a commit to mpolson64/Ax that referenced this pull request on Nov 22, 2024