Hi @hvarfner, thank you very much for making this change - I almost always add the LogNormal prior from your paper. I notice that the HOGP covariance module is unchanged; is this intentional? I think the release notes for 0.12.0 imply that it now uses the dimension-scaled prior (see `botorch/models/higher_order_gp.py`, lines 252 to 257 at `ad9978f`).
---
I'm finding in some cases that I get better performance with the Matern kernel (`use_rbf_kernel=False`). I appreciate that this would pollute the API a bit, as a lot of models would now have an extra argument.
---
Dimension-scaled Lengthscale Prior
The conventional Scale-Matern kernel with a $p(\ell) \sim \Gamma(3, 6)$ lengthscale prior yields good results on low-dimensional problems, but becomes increasingly local as the dimensionality of the problem increases due to its emphasis on short lengthscales. To make the out-of-the-box performance of the BoTorch models less susceptible to problem dimensionality, we are replacing the default covariance module with an RBF kernel that utilizes a LogNormal prior, $p(\ell) \sim \mathcal{LN}(\sqrt{2} + \log{\sqrt{D}}, \sqrt{3})$, whose location scales with the dimensionality $D$ of the problem, as proposed by Hvarfner et al. [1].
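For illustration, here is a minimal sketch of how such a prior can be attached to an RBF kernel directly in GPyTorch; this is not the exact BoTorch construction, and the dimensionality `D = 20` is just an example:

```python
import math

from gpytorch.kernels import RBFKernel
from gpytorch.priors import LogNormalPrior

D = 20  # problem dimensionality (illustrative)

# LogNormal lengthscale prior with location sqrt(2) + log(sqrt(D)):
# the prior mass shifts toward longer lengthscales as D grows,
# counteracting the overly local behavior of short-lengthscale priors.
lengthscale_prior = LogNormalPrior(
    loc=math.sqrt(2.0) + 0.5 * math.log(D),
    scale=math.sqrt(3.0),
)

covar_module = RBFKernel(
    ard_num_dims=D,
    lengthscale_prior=lengthscale_prior,
)
```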
Outputscale and Noise Prior Changes
The change also replaces the noise prior of the `GaussianLikelihood` with a LogNormal prior that prefers lower values, fixes the outputscale to 1, and switches the default kernel from Matern to RBF, all as proposed in [1]. An argument `use_rbf_kernel` has been added to the new helper `get_covar_module_with_dim_scaled_prior` in `models/utils/gpytorch_modules` for users who wish to revert part of the change. The old covariance module is still available through the `get_matern_kernel_with_gamma_prior` helper. Similarly, the new default noise prior is available through `get_gaussian_likelihood_with_lognormal_prior`, and the old one through `get_gaussian_likelihood_with_gamma_prior`.
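As a sketch, the new and old modules can be constructed explicitly via these helpers (argument names such as `ard_num_dims` follow the current signatures and may evolve):

```python
from botorch.models.utils.gpytorch_modules import (
    get_covar_module_with_dim_scaled_prior,
    get_gaussian_likelihood_with_lognormal_prior,
    get_matern_kernel_with_gamma_prior,
)

D = 10  # problem dimensionality (illustrative)

# New default: RBF kernel with the dimension-scaled LogNormal prior.
covar_module = get_covar_module_with_dim_scaled_prior(ard_num_dims=D)

# Same dimension-scaled prior, but on a Matern kernel instead of RBF.
matern_covar_module = get_covar_module_with_dim_scaled_prior(
    ard_num_dims=D, use_rbf_kernel=False
)

# New default noise prior, and the previous default covariance module.
likelihood = get_gaussian_likelihood_with_lognormal_prior()
old_covar_module = get_matern_kernel_with_gamma_prior(ard_num_dims=D)
```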
Planned Further Changes
Since the new covariance module does not include an outputscale parameter, we expect model performance to be more sensitive to the standardization of the training outcomes. To alleviate this issue and further improve the reliability of the default BoTorch models, we will start utilizing the `Normalize` and `Standardize` transforms by default. This change will automate one of the BoTorch best practices and simplify the setup required to use the BoTorch models.
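Until that change lands, the transforms can be applied explicitly when constructing a model; a minimal sketch, with illustrative toy data shapes:

```python
import torch

from botorch.models import SingleTaskGP
from botorch.models.transforms import Normalize, Standardize

# Toy training data; shapes are illustrative.
train_X = torch.rand(16, 3, dtype=torch.float64)
train_Y = torch.randn(16, 1, dtype=torch.float64)

# Explicitly apply the transforms that are planned to become defaults:
# Normalize maps inputs to the unit cube, and Standardize zero-means and
# unit-scales the outcomes (important now that the outputscale is fixed).
model = SingleTaskGP(
    train_X,
    train_Y,
    input_transform=Normalize(d=train_X.shape[-1]),
    outcome_transform=Standardize(m=train_Y.shape[-1]),
)
```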
Supporting Results
In the attached image, the densities of the old (blue) and the new priors are displayed for various dimensionalities. The new lengthscale priors are broader, and generally prefer larger values. The new noise prior prefers lower values, but still supports a moderate level of noise.
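A comparison along these lines can be reproduced with a few lines of code; a sketch using `torch.distributions`, where the chosen dimensionalities are arbitrary:

```python
import math

import matplotlib.pyplot as plt
import torch
from torch.distributions import Gamma, LogNormal

ell = torch.linspace(1e-3, 10.0, 1000)

# Old default lengthscale prior: Gamma(3, 6), independent of dimension.
plt.plot(ell, Gamma(3.0, 6.0).log_prob(ell).exp(), label="Gamma(3, 6)")

# New default: LogNormal prior whose location grows with log(sqrt(D)).
for D in (2, 10, 50):
    prior = LogNormal(math.sqrt(2.0) + 0.5 * math.log(D), math.sqrt(3.0))
    plt.plot(ell, prior.log_prob(ell).exp(), label=f"LogNormal, D={D}")

plt.xlabel("lengthscale")
plt.ylabel("prior density")
plt.legend()
plt.show()
```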
In addition to the results in the paper, we ran a large benchmark study to evaluate the new covariance and likelihood modules against the previous defaults, the SAASBO model, and some other variants we utilize internally. We found the new default, and variations of it, to perform near the top across a wide range of benchmark problems, including synthetic test functions and benchmarks built using internal datasets.
[1] Vanilla Bayesian Optimization Performs Great in High Dimensions. Carl Hvarfner, Erik Orm Hellsten, Luigi Nardi. International Conference on Machine Learning, 2024. URL: https://arxiv.org/abs/2402.02229