
Remove deprecated FixedNoiseGP #2536

Closed · wants to merge 2 commits

Conversation

Balandat (Contributor)

Summary: This model has been deprecated for a while and replaced by SingleTaskGP.

Differential Revision: D62680578
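For context, migrating off the deprecated model typically amounts to passing the known observation noise to `SingleTaskGP` directly. A minimal sketch; the data shapes and noise values below are illustrative assumptions, not from this PR:

```python
import torch
from botorch.models import SingleTaskGP

# Illustrative training data; shapes and values are assumptions for this sketch.
train_X = torch.rand(20, 3, dtype=torch.float64)
train_Y = torch.sin(train_X).sum(dim=-1, keepdim=True)
train_Yvar = torch.full_like(train_Y, 1e-4)  # known observation noise

# Previously: FixedNoiseGP(train_X, train_Y, train_Yvar)
# Replacement: SingleTaskGP with the noise passed via `train_Yvar`.
model = SingleTaskGP(train_X, train_Y, train_Yvar=train_Yvar)
```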

@facebook-github-bot added the "CLA Signed" label on Sep 14, 2024.
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D62680578


codecov bot commented Sep 14, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.98%. Comparing base (c895a8d) to head (750d7f8).
Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2536      +/-   ##
==========================================
- Coverage   99.98%   99.98%   -0.01%     
==========================================
  Files         193      193              
  Lines       17000    17001       +1     
==========================================
  Hits        16998    16998              
- Misses          2        3       +1     


…2532)

Summary:
Pull Request resolved: pytorch#2532

Makes models which had their priors updated in pytorch#2507 use the `Standardize` outcome transform by default, mimicking pytorch#2458

Also removes some deprecated functionality in the process, namely the `data_fidelity` argument to `SingleTaskMultiFidelityGP`, as well as the `FixedNoiseMultiFidelityGP` and `FixedNoiseLCEMGP` models.

Differential Revision: D62552307

Reviewed By: saitcakmak, esantorella
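As a rough illustration of the default behavior this commit describes (the specific model and data below are assumptions for the sketch; the actual change covers several models):

```python
import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(16, 2, dtype=torch.float64)
train_Y = 100.0 * torch.rand(16, 1, dtype=torch.float64)  # poorly scaled outcomes

# With the DEFAULT sentinel, outcomes are standardized unless the user opts out.
model_default = SingleTaskGP(train_X, train_Y)

# Passing `outcome_transform=None` explicitly disables the default transform.
model_raw = SingleTaskGP(train_X, train_Y, outcome_transform=None)
```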

Balandat added a commit to Balandat/botorch that referenced this pull request Sep 14, 2024
Summary:
Pull Request resolved: pytorch#2536

This model has been deprecated for a while and replaced by `SingleTaskGP`.

Differential Revision: D62680578

Comment on lines +365 to +371
outcome_transform: An outcome transform that is applied to the training
data during instantiation and to the posterior during inference.
NOTE: If this model is trained in minibatches, an outcome transform
with learnable parameters (such as `Standardize`) would update its
parameters for each minibatch, which is undesirable. If you do intend
to train in minibatches, we recommend you not use an outcome transform
and instead pre-transform your whole data set before fitting the model.
Member

Thanks for adding these docstrings!

Good to be explicit about defaults now that many models use `Standardize` by default. (A sketch of the pre-transform approach mentioned in the NOTE follows the suggestion below.)

Suggested change
outcome_transform: An outcome transform that is applied to the training
data during instantiation and to the posterior during inference.
NOTE: If this model is trained in minibatches, an outcome transform
with learnable parameters (such as `Standardize`) would update its
parameters for each minibatch, which is undesirable. If you do intend
to train in minibatches, we recommend you not use an outcome transform
and instead pre-transform your whole data set before fitting the model.
outcome_transform: An outcome transform that is applied to the training
data during instantiation and to the posterior during inference. By
default, no outcome transform is used. It is not recommended to use an
outcome transform if this model is trained in minibatches, because an
outcome transform with learnable parameters (such as `Standardize`)
would update its parameters for each minibatch, which is undesirable.
If you do intend to train in minibatches, we recommend you not use an
outcome transform and instead pre-transform your whole data set before
fitting the model.
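A minimal sketch of the pre-transform approach the NOTE recommends for minibatch training; the shapes and the manual standardization below are assumptions for illustration, not BoTorch API:

```python
import torch

# Illustrative outcomes; the shape is an assumption for this sketch.
train_Y = torch.randn(1000, 1, dtype=torch.float64)

# Standardize the full data set once, up front, instead of letting a learnable
# outcome transform (e.g. `Standardize`) re-fit its parameters on every minibatch.
Y_mean = train_Y.mean(dim=0, keepdim=True)
Y_std = train_Y.std(dim=0, keepdim=True)
train_Y_standardized = (train_Y - Y_mean) / Y_std

# Fit the model on `train_Y_standardized` with no outcome_transform, and keep
# (Y_mean, Y_std) around to un-standardize posterior predictions afterwards.
```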

@@ -51,8 +51,8 @@ def __init__(
embs_dim_list: Optional[list[int]] = None,
output_tasks: Optional[list[int]] = None,
all_tasks: Optional[list[int]] = None,
outcome_transform: OutcomeTransform | _DefaultType | None = DEFAULT,
Member

Suggested change
outcome_transform: OutcomeTransform | _DefaultType | None = DEFAULT,
*,
outcome_transform: OutcomeTransform | _DefaultType | None = DEFAULT,
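For readers unfamiliar with the `*` marker in the suggestion, a tiny standalone illustration; the class and arguments here are placeholders, not the actual model signature:

```python
class Model:
    # Placeholder signature: `*` makes everything after it keyword-only.
    def __init__(self, train_X, train_Y, *, outcome_transform=None):
        self.outcome_transform = outcome_transform

# Model(X, Y, my_transform)                    # raises TypeError
# Model(X, Y, outcome_transform=my_transform)  # OK
```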

@@ -247,62 +249,3 @@ def construct_inputs(
if embs_dim_list is not None:
base_inputs["embs_dim_list"] = embs_dim_list
return base_inputs


class FixedNoiseLCEMGP(LCEMGP):
Member

Let's mention this removal in the summary or title.

@@ -254,6 +255,11 @@ def __init__(
task_feature=task_feature,
output_tasks=output_tasks,
rank=rank,
# We already transformed the data above, this avoids applying the
Member

Thanks!

training data during instantiation and to the posterior during
inference (that is, the `Posterior` obtained by calling
`.posterior` on the model will be on the original scale).
Member

Thanks for fixing up all this documentation!
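A small sketch of the behavior this docstring describes, i.e. the posterior being returned on the original outcome scale when an outcome transform is attached; the data below are illustrative assumptions:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize

train_X = torch.rand(10, 2, dtype=torch.float64)
train_Y = 50.0 + 5.0 * torch.rand(10, 1, dtype=torch.float64)

model = SingleTaskGP(train_X, train_Y, outcome_transform=Standardize(m=1))

# The model trains on standardized outcomes internally, but `.posterior`
# un-transforms the result back to the original scale of `train_Y`.
posterior = model.posterior(train_X)
print(posterior.mean)  # on the original scale, not the standardized one
```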

@@ -65,7 +66,7 @@ def __init__(
Callable[[torch.Size, int, list[int]], Kernel]
] = None,
likelihood: Optional[Likelihood] = None,
outcome_transform: Optional[OutcomeTransform] = None, # TODO
outcome_transform: Optional[Union[OutcomeTransform, _DefaultType]] = DEFAULT,
Member

Suggested change
outcome_transform: Optional[Union[OutcomeTransform, _DefaultType]] = DEFAULT,
outcome_transform: OutcomeTransform | _DefaultType | None = DEFAULT,

Comment on lines +189 to +190
Pass down `None` to use no outcome transform. NOTE: Standardization
should be applied in a stratified fashion, separately for each task.
Member

Can you clarify how one would apply standardization in a stratified fashion, perhaps with an example? Does the default get it right, or should the user manually standardize their data, or do they need to specify the outcome_transform in a certain way?
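For illustration only, one way stratified standardization could be done by hand, standardizing each task's outcomes separately. Placing the task index in the last column of `train_X` is an assumption for this sketch, not the model's required convention:

```python
import torch

# Illustrative multi-task data; the task index is assumed to be the last column.
train_X = torch.cat(
    [
        torch.rand(30, 2, dtype=torch.float64),
        torch.randint(0, 3, (30, 1)).to(torch.float64),
    ],
    dim=-1,
)
train_Y = torch.randn(30, 1, dtype=torch.float64)

task_idx = train_X[..., -1].long()
train_Y_standardized = train_Y.clone()
for t in task_idx.unique():
    mask = task_idx == t
    mean = train_Y[mask].mean()
    std = train_Y[mask].std().clamp_min(1e-9)
    # Standardize each task's outcomes separately ("stratified").
    train_Y_standardized[mask] = (train_Y[mask] - mean) / std
```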

if m > 1:
    if not isinstance(model, BatchedMultiOutputGPyTorchModel):
        raise UnsupportedError(
            "`get_pvar_expected` only supports `BatchedMultiOutputGPyTorchModel`s."
Member

Suggested change
"`get_pvar_expected` only supports `BatchedMultiOutputGPyTorchModel`s."
"`get_pvar_expected` only supports `BatchedMultiOutputGPyTorchModel`s when m>1."

@facebook-github-bot (Contributor)

This pull request has been merged in 18eb95a.
