[GENERAL SUPPORT]: Clarification on Using GenerationStep for BO_MODULAR Model #3310
Hi Dan, thanks for opening the question. Sorry to hear about the intermittent inaccessibility; we've just recently launched our new websites, so what you're encountering could be related to that. To answer your question, the "Modular BoTorch Model" tutorial should stay up and available, and it is a good resource for getting BoTorch Modular working with GenerationStep. This section outlines the typical surrogate models and acquisition functions: https://ax.dev/docs/tutorials/modular_botax.html#appendix-2-default-surrogate-models-and-acquisition-functions
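For reference, a minimal sketch of what the tutorial linked above describes: a GenerationStrategy whose BOTORCH_MODULAR step takes an explicit surrogate class and acquisition function. This is illustrative only; exact import paths vary between Ax versions, so check the tutorial for your installed release.

```python
# Sketch based on the Modular BoTorch tutorial; import paths may differ
# across Ax versions, so treat this as illustrative rather than canonical.
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.acquisition.logei import qLogNoisyExpectedImprovement
from botorch.models import SingleTaskGP

gs = GenerationStrategy(
    steps=[
        # Quasi-random initialization trials.
        GenerationStep(model=Models.SOBOL, num_trials=10),
        # BoTorch Modular step: both the surrogate model and the
        # acquisition function are swappable via model_kwargs.
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,  # run until the experiment is concluded
            model_kwargs={
                "surrogate": Surrogate(botorch_model_class=SingleTaskGP),
                "botorch_acqf_class": qLogNoisyExpectedImprovement,
            },
        ),
    ]
)
```

The appendix linked above lists which surrogate classes and acquisition function classes can be dropped into those two slots.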
Hi @NooriDan! I'm curious why you are configuring your own
Thanks for both of your replies! I'm working on an open-source tool to help analog circuit designers optimize their designs, and I've seen many papers using BO in different flavours, such as multi-fidelity, truncated-subspace sampling, and some acquisition-function variations. I've gotten some good results with just BOTORCH_MODULAR. I was hoping to expose just the right amount of customizability to the circuit designers running the code. The problem with these circuits is that they can be high-dimensional (10-25, or sometimes 25+, parameters), and each SPICE simulation can take anywhere from a fraction of a second to multiple hours to run, depending on the level of simulation accuracy and circuit complexity. We're trying to compare the effectiveness (time savings) of BO against evolutionary algorithms and reinforcement learning.
So this is a useful resource, but I was hoping to simply customize the models using the GenerationStrategy and GenerationStep arguments. Thanks again for maintaining Ax! This is very helpful for scientists and engineers :)
@saitcakmak can tell you more, but Ax usually dispatches to the optimal BO configuration for your search space under the hood, and that dispatch is based on extensive research and benchmarking. It's unlikely that you actually need to configure BOTORCH_MODULAR much, or have your users do so; chances are, you can just let Ax make the choice. We are about to release a major upgrade to our dispatch layer, at which point you really should see Ax selecting the right models; @saitcakmak can update us on the planned release time for that.
This is great news!
The example I was working on has 7 range parameters on a log scale from 1 to 1000, and there's only one metric I was optimizing for. But in analog circuit design, problems can be high-dimensional (10-25, or sometimes 25+, parameters), and each SPICE simulation can take anywhere from a fraction of a second to multiple hours to run, depending on the level of simulation accuracy and circuit complexity. I thought SAASBO would work better and faster on high-dimensional problems. Is that not the case?
Oops... yes, trial is the correct term in Ax :) Currently I've used 10 Sobol trials and a maximum of 90 BO trials. SAASBO fails after 5 generations (so on the 15th trial, including the 10 Sobol trials).
I think better, but not faster. There might be an alternative model we could set you up with, called MAP SAAS.
Hmm, this does seem to be very soon. Fails how?
Fails as in the trial doesn't progress for more than 5 minutes, whereas anything before this finishes in less than a couple of seconds. So I just interrupt the optimization loop.
@NooriDan hmm, that doesn't sound like an actual failure; it would be good to hear the actual runtime if you don't interrupt the execution.
I can add a bit more to Lena's answers. Rather than trying to pick the "optimal" method for each setting, our dispatching logic aims to deliver a balanced, well-performing method out of the box. For example, SAASBO often performs better in high-dimensional problems but, as you've noticed, it can get quite slow. We've instead updated the default

On the acquisition function side, we use qLogNEI & qLogNEHVI (single / multi-objective) by default. We've found these acquisition functions to deliver good performance across the board. There are other options in the literature, like UCB (cheaper, may not perform as well) and Knowledge Gradient (significantly more expensive), which are implemented in BoTorch and can be used with the Modular BoTorch setup.
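To make the acquisition-function point concrete, here is a hypothetical variation on a BOTORCH_MODULAR step that swaps in UCB, one of the BoTorch alternatives named above. The `beta` value shown is an arbitrary illustrative choice, and import paths may differ across BoTorch versions.

```python
# Sketch: swapping the acquisition function in a BOTORCH_MODULAR step.
# qLogNEI is the Ax default; qUpperConfidenceBound (UCB) and
# qKnowledgeGradient are BoTorch alternatives mentioned in this thread.
from ax.modelbridge.generation_strategy import GenerationStep
from ax.modelbridge.registry import Models
from botorch.acquisition import qUpperConfidenceBound

ucb_step = GenerationStep(
    model=Models.BOTORCH_MODULAR,
    num_trials=-1,
    model_kwargs={
        "botorch_acqf_class": qUpperConfidenceBound,
        # UCB takes a `beta` exploration parameter; 0.2 is an
        # arbitrary example value, not a recommended default.
        "acquisition_options": {"beta": 0.2},
    },
)
```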
You can refer to this discussion and the paper linked there, which shows consistent performance across different search-space dimensionalities. We have other models specifically designed for high-dimensional settings (like the MAP SAAS model mentioned above); however, the performance gap is quite small. We've found the default model to work more consistently across different settings.
While completing trials in Ax, you can provide a
The SAASBO model uses fully Bayesian inference with NUTS. This can be quite slow, but it tends to perform quite well in high-dimensional problems with small experiment budgets. It is recommended when evaluations of the objective function are expensive. We often optimize problems where evaluations range from multiple hours to days; there, SAASBO is the model of choice. If the evaluations take only a few minutes, it is likely not the right choice.
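For completeness, one way to try SAASBO in a generation strategy is via Ax's model registry. This is a sketch; the NUTS sampler settings (warmup steps, number of samples) are what dominate the per-trial runtime discussed above, and the exact knobs exposed vary by Ax version.

```python
# Sketch: a GenerationStrategy that uses the fully Bayesian SAAS GP.
# Expect minutes (or more) per candidate-generation step, since each
# fit runs NUTS; import paths may differ across Ax versions.
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

saasbo_gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=10),
        # Fully Bayesian SAAS GP fit with NUTS on every generation.
        GenerationStep(model=Models.SAASBO, num_trials=-1),
    ]
)
```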
BO with Gaussian Process models (the main methods in Ax) can get quite slow as the number of observations increases: the cost of GP inference scales cubically in the number of observations. The typical optimization budget we work with is less than 100 trials, but these methods (perhaps with some custom settings) can be used up to a couple thousand evaluations as well. They will get slower, so they may not be the best choice if the function evaluations are relatively cheap.
Thanks @saitcakmak and @lena-kashtelyan for your awesome explanation. Two questions...
The simulations could take up to several days in the final stage of VLSI system design; however, in some preliminary design steps they could take several hours or only several minutes. Do you have any resources on the custom settings needed to use GPs with higher evaluation counts? Lastly, how robust is BO to local optima in multimodal objective functions, for example, compared to evolutionary algorithms?
We're planning to include options with Ax 1.0 that'll let you choose between cheaper options and slower but more performant ones. However, this work is still in progress. In the current Ax version, you could pick between the default
In the typical evaluation budgets we work with, evolutionary algorithms don't tend to work that well; they typically require many more evaluations. I don't have references on hand, but you can check out some of the recent literature on this.
Question
Hi,
I am trying to configure the BO_MODULAR model using GenerationStep but need some clarification. Additionally, I have noticed that the documentation has been intermittently inaccessible over the past few days (it often displays "page not found"), which I suspect might be due to a recent update.
Specifically, I would like to know:
What surrogate models are available for BO_MODULAR?
What acquisition functions can be used with it?
Here is the configuration I am currently using:
Please provide any relevant code snippet if applicable.