Remove deprecated FixedNoiseGP #2536
Conversation
This pull request was exported from Phabricator. Differential Revision: D62680578
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff            @@
##             main    #2536     +/-  ##
=========================================
- Coverage   99.98%   99.98%   -0.01%
=========================================
  Files         193      193
  Lines       17000    17001       +1
=========================================
  Hits        16998    16998
- Misses          2        3       +1
=========================================
```

☔ View full report in Codecov by Sentry.
…2532)

Summary: Pull Request resolved: pytorch#2532. Makes models which had their priors updated in pytorch#2507 use the `Standardize` outcome transform by default, mimicking pytorch#2458. Also removes some deprecated functionality in the process, namely the `data_fidelity` argument to `SingleTaskMultiFidelityGP`, as well as the `FixedNoiseMultiFidelityGP` and `FixedNoiseLCEMGP` models.

Differential Revision: D62552307
Reviewed By: saitcakmak, esantorella
This pull request was exported from Phabricator. Differential Revision: D62680578
0f862e4 to 4cafd4f (Compare)
Summary: Pull Request resolved: pytorch#2536. This model has been deprecated for a while and replaced by `SingleTaskGP`.

Differential Revision: D62680578
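The removal described in the summary is the last step of a standard deprecation path: keep the old name as a thin wrapper that warns and delegates to the replacement, then delete it. A simplified stdlib-only sketch of that pattern; the class names echo the BoTorch ones, but the bodies here are toy stand-ins, not the real models:

```python
import warnings


class SingleTaskGP:
    """Toy stand-in for the replacement model (accepts observed noise)."""

    def __init__(self, train_X, train_Y, train_Yvar=None):
        self.train_X = train_X
        self.train_Y = train_Y
        self.train_Yvar = train_Yvar


class FixedNoiseGP(SingleTaskGP):
    """Deprecated: warns and delegates to SingleTaskGP, then gets deleted."""

    def __init__(self, train_X, train_Y, train_Yvar):
        warnings.warn(
            "FixedNoiseGP is deprecated; use SingleTaskGP with train_Yvar.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(train_X, train_Y, train_Yvar=train_Yvar)
```

Once the warning has shipped for long enough, the wrapper class can be removed outright, which is what this PR does.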
This pull request was exported from Phabricator. Differential Revision: D62680578
4cafd4f to 750d7f8 (Compare)
```python
outcome_transform: An outcome transform that is applied to the training
    data during instantiation and to the posterior during inference.
    NOTE: If this model is trained in minibatches, an outcome transform
    with learnable parameters (such as `Standardize`) would update its
    parameters for each minibatch, which is undesirable. If you do intend
    to train in minibatches, we recommend you not use an outcome transform
    and instead pre-transform your whole data set before fitting the model.
```
Thanks for adding these docstrings! Good to be explicit about defaults now that many models have `Standardize` by default.
Suggested change:

```diff
 outcome_transform: An outcome transform that is applied to the training
-    data during instantiation and to the posterior during inference.
-    NOTE: If this model is trained in minibatches, an outcome transform
+    data during instantiation and to the posterior during inference. By
+    default, no outcome transform is used. It is not recommended to use
+    an outcome transform if this model is trained in minibatches, because
+    an outcome transform
     with learnable parameters (such as `Standardize`) would update its
     parameters for each minibatch, which is undesirable. If you do intend
     to train in minibatches, we recommend you not use an outcome transform
     and instead pre-transform your whole data set before fitting the model.
```
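The minibatch caveat in the docstring can be made concrete: instead of letting a learnable transform re-fit its statistics on every batch, standardize the full outcome set once up front and keep the statistics around to map predictions back. A stdlib-only sketch; `standardize` and `unstandardize` are hypothetical helpers, not BoTorch APIs:

```python
from statistics import fmean, pstdev


def standardize(ys):
    """Standardize the whole outcome list once, before any minibatching.

    Returns the transformed values plus the (mean, std) needed to map
    posterior predictions back to the original scale.
    """
    mu = fmean(ys)
    sigma = pstdev(ys) or 1.0  # guard against zero variance
    return [(y - mu) / sigma for y in ys], (mu, sigma)


def unstandardize(zs, mu, sigma):
    """Map standardized values back to the original outcome scale."""
    return [z * sigma + mu for z in zs]
```

Because the statistics are computed once over the full data set, every minibatch sees the same transform, which is exactly what a per-batch `Standardize` cannot guarantee.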
```diff
@@ -51,8 +51,8 @@ def __init__(
     embs_dim_list: Optional[list[int]] = None,
     output_tasks: Optional[list[int]] = None,
     all_tasks: Optional[list[int]] = None,
     outcome_transform: OutcomeTransform | _DefaultType | None = DEFAULT,
```
Suggested change:

```diff
-    outcome_transform: OutcomeTransform | _DefaultType | None = DEFAULT,
+    *,
+    outcome_transform: OutcomeTransform | _DefaultType | None = DEFAULT,
```
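The suggested bare `*` marker makes every parameter after it keyword-only, so callers cannot pass `outcome_transform` positionally and silently get the wrong argument order. A minimal illustration of the mechanism; `make_model` is a toy function, not the BoTorch signature:

```python
def make_model(train_X, train_Y, *, outcome_transform=None):
    # Everything after the bare `*` can only be passed by keyword.
    return {"X": train_X, "Y": train_Y, "transform": outcome_transform}


# Keyword call works; a positional third argument raises TypeError.
ok = make_model([1.0], [2.0], outcome_transform="standardize")
```

This is a cheap way to keep a long `__init__` signature evolvable: new parameters can be inserted or reordered after the `*` without breaking existing call sites.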
```diff
@@ -247,62 +249,3 @@ def construct_inputs(
         if embs_dim_list is not None:
             base_inputs["embs_dim_list"] = embs_dim_list
         return base_inputs
-
-
-class FixedNoiseLCEMGP(LCEMGP):
```
Let's mention this in the summary or title
```diff
@@ -254,6 +255,11 @@ def __init__(
             task_feature=task_feature,
             output_tasks=output_tasks,
             rank=rank,
             # We already transformed the data above, this avoids applying the
```
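The comment in the hunk above guards against a real pitfall: if a subclass transforms the training data itself and then also hands the transform to its parent constructor, the transform gets applied twice. A toy sketch of the pattern being used; these are hypothetical classes, not the BoTorch implementation:

```python
class Base:
    """Applies the outcome transform to the data it receives, if given."""

    def __init__(self, y, transform=None):
        self.y = transform(y) if transform is not None else y


class Child(Base):
    """Transforms the data itself, so it must not pass the transform down."""

    def __init__(self, y, transform=None):
        if transform is not None:
            y = transform(y)  # transform exactly once, here
        # Pass transform=None so Base does not apply it a second time.
        super().__init__(y, transform=None)
```

Forgetting the `transform=None` in the `super().__init__` call would double-apply the transform, which for something like standardization silently corrupts the model's training targets.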
Thanks!
```python
    training data during instantiation and to the posterior during
    inference (that is, the `Posterior` obtained by calling
    `.posterior` on the model will be on the original scale).
```
Thanks for fixing up all this documentation!
```diff
@@ -65,7 +66,7 @@ def __init__(
             Callable[[torch.Size, int, list[int]], Kernel]
         ] = None,
         likelihood: Optional[Likelihood] = None,
-        outcome_transform: Optional[OutcomeTransform] = None,  # TODO
+        outcome_transform: Optional[Union[OutcomeTransform, _DefaultType]] = DEFAULT,
```
Suggested change:

```diff
-    outcome_transform: Optional[Union[OutcomeTransform, _DefaultType]] = DEFAULT,
+    outcome_transform: OutcomeTransform | _DefaultType | None = DEFAULT,
```
```python
    Pass down `None` to use no outcome transform. NOTE: Standardization
    should be applied in a stratified fashion, separately for each task.
```
Can you clarify how one would apply standardization in a stratified fashion, perhaps with an example? Does the default get it right, or should the user manually standardize their data, or do they need to specify the `outcome_transform` in a certain way?
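One plausible reading of "stratified" here is: compute the mean and standard deviation separately within each task, and standardize each task's outcomes with its own statistics rather than with global ones. A stdlib-only sketch of that idea; `standardize_by_task` is a hypothetical helper for illustration, not a BoTorch API:

```python
from collections import defaultdict
from statistics import fmean, pstdev


def standardize_by_task(task_ids, ys):
    """Standardize outcomes separately within each task (stratified)."""
    groups = defaultdict(list)
    for t, y in zip(task_ids, ys):
        groups[t].append(y)
    # Per-task (mean, std); guard against zero variance within a task.
    stats = {t: (fmean(v), pstdev(v) or 1.0) for t, v in groups.items()}
    return [(y - stats[t][0]) / stats[t][1] for t, y in zip(task_ids, ys)]
```

With outcomes on very different scales per task (say one task near 0 and another near 200), global standardization would leave each task's values far from zero mean, whereas the per-task version centers and scales each stratum independently.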
```python
if m > 1:
    if not isinstance(model, BatchedMultiOutputGPyTorchModel):
        raise UnsupportedError(
            "`get_pvar_expected` only supports `BatchedMultiOutputGPyTorchModel`s."
```
Suggested change:

```diff
-            "`get_pvar_expected` only supports `BatchedMultiOutputGPyTorchModel`s."
+            "`get_pvar_expected` only supports `BatchedMultiOutputGPyTorchModel`s when m>1."
```
This pull request has been merged in 18eb95a. |
Summary: This model has been deprecated for a while and replaced by `SingleTaskGP`.

Differential Revision: D62680578