
In 'best_point' don't require GeneratorRun to have best_arm_predictions to predict from model #2767

Closed
wants to merge 3 commits

Conversation

esantorella
Contributor

Summary:
Context:

`get_best_parameters_from_model_predictions_with_trial_index` will only predict from a model if there are `best_arm_predictions` on the `GeneratorRun`. This doesn't make sense, since the function is about to construct and fit a new model and use it to generate predictions; any existing `best_arm_predictions` are not used.

This PR:

  • Removes the `gr.best_arm_predictions is not None` check (a minimal sketch of the change follows below)
  • Changes how some imported functions are referenced in `best_point_mixin.py` (no functional change)

Reviewed By: mpolson64

Differential Revision: D62594017
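
To make the change concrete, here is a minimal, hypothetical sketch of the control flow described above; `FakeGeneratorRun`, `best_point_from_fresh_model`, and the commented-out early return are illustrative stand-ins, not Ax's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FakeGeneratorRun:
    # Illustrative stand-in for Ax's GeneratorRun; only the field relevant here.
    best_arm_predictions: Optional[dict] = None


def best_point_from_fresh_model(gr: FakeGeneratorRun) -> dict:
    """Sketch of the new behavior: a fresh model is always constructed and
    used to predict, whether or not the GeneratorRun already carries
    best_arm_predictions."""
    # Old behavior (removed by this PR): return early unless predictions were
    # already attached, even though those predictions were never used.
    # if gr.best_arm_predictions is None:
    #     return fall_back_to_raw_observations()
    freshly_fit_model_prediction = {"objective_mean": 1.23}  # placeholder
    return freshly_fit_model_prediction


# Behaves identically with or without pre-existing best_arm_predictions.
print(best_point_from_fresh_model(FakeGeneratorRun()))
print(best_point_from_fresh_model(FakeGeneratorRun({"objective_mean": 0.9})))
```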

esantorella and others added 3 commits September 12, 2024 13:44
Summary:
* Add `BenchmarkProblem.get_oracle_experiment_from_params`, a method to compute an experiment in which parameters are evaluated at oracle values. This will be useful once we enable inference regret.
* Add a helper `get_oracle_experiment_from_experiment` to replicate the old behavior of `get_oracle_experiment` (see the sketch after this list).
* Remove `get_opt_trace` from `BenchmarkProblem` and absorb that logic into `benchmark_replication`; once we have inference regret enabled, how we compute the trace should depend on the _method_, not the problem. The problem should only be responsible for computing oracle values given a parameterization.
* Arc lint
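
A rough sketch of how these two entry points might relate, under the assumption that `get_oracle_experiment_from_experiment` extracts parameterizations from an existing experiment and delegates to `get_oracle_experiment_from_params`; all names, signatures, and the `evaluate_oracle` call are illustrative, not the actual Ax API:

```python
from typing import Any, Iterable

# Illustrative alias; Ax's real parameterization type differs.
Parameterization = dict[str, Any]


def get_oracle_experiment_from_params_sketch(
    problem: Any, parameterizations: Iterable[Parameterization]
) -> list[tuple[Parameterization, float]]:
    """Evaluate each parameterization at its oracle (ground-truth) value; an
    'experiment' is represented here as a list of (params, value) pairs."""
    return [(p, problem.evaluate_oracle(p)) for p in parameterizations]


def get_oracle_experiment_from_experiment_sketch(
    problem: Any, experiment: Any
) -> list[tuple[Parameterization, float]]:
    """Replicate the old behavior by pulling parameterizations off an existing
    experiment's arms and delegating to the params-based method."""
    # Assumes, for this sketch, that `trials` is an iterable of trials and
    # each trial exposes its arms' parameters.
    params = [
        arm.parameters for trial in experiment.trials for arm in trial.arms
    ]
    return get_oracle_experiment_from_params_sketch(problem, params)
```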

Differential Revision: D62250058
Summary:
Context: There used to be more `BenchmarkProblem` subclasses, and they used to implement their own `__repr__` methods, so there were tests for the custom repr methods. Now `BenchmarkProblem` and its subclass `SurrogateBenchmarkProblem` get their `__repr__` methods from being dataclasses. These tests have become annoying because they break with any change to `BenchmarkProblem`, even if just changing the order of arguments.
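
For illustration only (generic Python, not Ax code): a dataclass's auto-generated `__repr__` lists fields in declaration order, which is why repr-based tests break whenever fields are added, removed, or reordered.

```python
from dataclasses import dataclass


@dataclass
class ProblemA:
    name: str
    num_trials: int


@dataclass
class ProblemB:  # same fields, declared in a different order
    num_trials: int
    name: str


# The generated __repr__ lists fields in declaration order, so a test that
# compares repr strings breaks as soon as field order (or any field) changes.
print(ProblemA(name="branin", num_trials=30))  # ProblemA(name='branin', num_trials=30)
print(ProblemB(num_trials=30, name="branin"))  # ProblemB(num_trials=30, name='branin')
```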

This PR:
* Removes two `test_repr` methods.

Differential Revision: D62518032
In 'best_point' don't require GeneratorRun to have best_arm_predictions to predict from model

Differential Revision: D62594017
@facebook-github-bot added the CLA Signed label Sep 13, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D62594017

@facebook-github-bot
Contributor

This pull request has been merged in 2fc80f1.
