Implement configure_runner, configure_metric #3104

Open · wants to merge 3 commits into `main`
Conversation

**mpolson64** (Contributor) commented:
Summary:
`configure_runner` and `configure_metric` allow users to attach custom Runners and Metrics to their experiment.

`configure_runner` is fairly straightforward and just sets `experiment.runner`.

`configure_metric` is more involved: given a list of `IMetric`s, it iterates through them and tries to find a metric with the same name somewhere on the experiment. In order, it checks the objective (single, MOO, or scalarized), then the outcome constraints, then the tracking metrics. If no metric with a matching name is found, the provided metric is added as a tracking metric.
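For reference, a rough sketch of that lookup order (illustrative only, not the code in this diff; the in-place replacement in steps 1 and 2 is elided, and `configure_metrics` is a hypothetical name):

```python
from typing import Sequence

from ax.core.experiment import Experiment
from ax.core.metric import Metric


def configure_metrics(experiment: Experiment, metrics: Sequence[Metric]) -> None:
    # (configure_runner, by contrast, amounts to `experiment.runner = runner`.)
    for metric in metrics:
        config = experiment.optimization_config
        if config is not None and metric.name in {
            m.name for m in config.objective.metrics
        }:
            ...  # 1. replace the metric inside the objective (single, MOO, or scalarized)
        elif config is not None and metric.name in {
            c.metric.name for c in config.outcome_constraints
        }:
            ...  # 2. replace the metric on the matching outcome constraint
        elif metric.name in experiment.metrics:
            # 3. a same-named tracking metric already exists: swap it for the new one
            experiment.remove_tracking_metric(metric.name)
            experiment.add_tracking_metric(metric)
        else:
            # 4. no match anywhere on the experiment: attach as a tracking metric
            experiment.add_tracking_metric(metric)
```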

Differential Revision: D66305614

Summary:

Implements new Client methods `configure_experiment`, `configure_optimization`, and `configure_generation_strategy`.

Creates a new module, `api.utils.instantiation`, that holds functions for converting Configs into core Ax objects. These functions do not perform validation; validation will live on the configs themselves and be implemented in a separate diff.

Note that this diff also does not implement saving to the DB, although this will happen after each of these three methods is called if a config is provided.

**I'd especially like comment on our use of SymPy** here to parse objective and constraint strings -- what we've wound up with is much less verbose and, I suspect, much less error-prone than what exists now in InstantiationBase, while also providing a more natural user experience (e.g. not getting tripped up by spacing, automatically handling inequality simplification like `(x1 + x2) / 2 + 0.5 >= 0` --> `-0.5 * x1 - 0.5 * x2 <= 0.5`, etc.) without any manual string parsing on our end at all. I'm curious what people think of this strategy overall. SymPy usage occurs in `_parse_objective`, `_parse_parameter_constraint`, and `_parse_outcome_constraint`.
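To make the idea concrete, here is a standalone sketch (not the code in `_parse_parameter_constraint`; it assumes a linear, non-strict inequality) of how SymPy can normalize a constraint string into `sum(coeff * x) <= bound` form:

```python
import sympy


def parse_constraint(s: str) -> tuple[dict[str, float], float]:
    """Normalize a linear inequality string into ({name: coeff}, bound), `<=` sense."""
    ineq = sympy.sympify(s)  # e.g. Ge(x1/2 + x2/2 + 0.5, 0)
    if isinstance(ineq, sympy.GreaterThan):  # lhs >= rhs  ->  rhs - lhs <= 0
        expr = sympy.expand(ineq.rhs - ineq.lhs)
    elif isinstance(ineq, sympy.LessThan):  # lhs <= rhs  ->  lhs - rhs <= 0
        expr = sympy.expand(ineq.lhs - ineq.rhs)
    else:
        raise ValueError(f"Expected a non-strict inequality, got {s!r}")
    # Per-variable coefficients, plus the constant term moved to the bound side.
    coeffs = {str(sym): float(expr.coeff(sym)) for sym in expr.free_symbols}
    constant, _ = expr.as_independent(*expr.free_symbols)
    return coeffs, -float(constant)  # i.e. sum(coeffs[name] * name) <= bound


print(parse_constraint("(x1 + x2) / 2 + 0.5 >= 0"))
# ({'x1': -0.5, 'x2': -0.5}, 0.5)
```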

Specific RFCs:
* We made the decision earlier to combine the concepts of "outcome constraint" and "objective threshold" into a single concept to make things clearer for our users -- do we still stand by this decision? Seeing it in practice, I think it will help our users a ton, but I want to confirm this before we get too far into implementation.
* We discussed previously that if we were using strings to represent objectives, we wanted users to be able to specify optimization direction via coefficients (e.g. objective="loss" vs objective="-loss"), **but we did not decide which direction a positive coefficient would indicate**. In this diff I've implemented things such that a positive coefficient indicates minimization, but I'm happy to change it -- I don't think one is better than the other; we just need to be consistent. (See the sketch after this list.)
* To express relative outcome constraints, rather than use "%" like we do in AxClient, we ask the user to multiply their bound by the term "baseline" (e.g. "qps >= 0.95 * baseline" will constrain the QPS to be at least 95% of the baseline arm's QPS). To be honest, we do this to make things play nicely with SymPy, but I also find it clearer, though I'm curious what you all think. (See the sketch after this list.)
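A sketch of both string conventions above (`parse_objective` and `parse_outcome_constraint` are hypothetical names, not the functions in this diff, and the constraint sketch assumes a bare metric name on the left-hand side):

```python
import sympy


def parse_objective(s: str) -> list[tuple[str, bool]]:
    """Map e.g. "-loss" to [("loss", False)]: a positive coefficient means minimize."""
    expr = sympy.expand(sympy.sympify(s))
    return [(str(sym), float(expr.coeff(sym)) > 0) for sym in expr.free_symbols]


def parse_outcome_constraint(s: str) -> tuple[str, float, bool]:
    """Map e.g. "qps >= 0.95 * baseline" to ("qps", 0.95, True), where the
    reserved symbol `baseline` marks the constraint as relative."""
    ineq = sympy.sympify(s)
    baseline = sympy.Symbol("baseline")
    relative = baseline in ineq.rhs.free_symbols
    bound = float(ineq.rhs.coeff(baseline)) if relative else float(ineq.rhs)
    return str(ineq.lhs), bound, relative


print(parse_objective("-loss"))                            # [('loss', False)]
print(parse_outcome_constraint("qps >= 0.95 * baseline"))  # ('qps', 0.95, True)
```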

Differential Revision: D65826204
Summary:

These methods are not, strictly speaking, "part of the API," but they may be useful for developers and trusted partners. Each is fairly self-explanatory.

Differential Revision: D66304352
@facebook-github-bot added the **CLA Signed** label on Nov 21, 2024
**facebook-github-bot** (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D66305614

**codecov-commenter** commented:

Codecov Report

Attention: Patch coverage is 94.96021% with 19 lines in your changes missing coverage. Please review.

Project coverage is 95.77%. Comparing base (4aa1d65) to head (c7b8aeb).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ax/preview/api/client.py | 81.81% | 12 Missing ⚠️ |
| ax/preview/api/utils/instantiation/from_string.py | 94.11% | 4 Missing ⚠️ |
| ax/preview/api/utils/instantiation/from_config.py | 94.23% | 3 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3104      +/-   ##
==========================================
+ Coverage   95.43%   95.77%   +0.33%     
==========================================
  Files         495      500       +5     
  Lines       49956    50316     +360     
==========================================
+ Hits        47674    48188     +514     
+ Misses       2282     2128     -154     

☔ View full report in Codecov by Sentry.

mpolson64 added a commit to mpolson64/Ax that referenced this pull request Nov 22, 2024
Labels: CLA Signed, fb-exported · 3 participants