Stopping criteria for optimisation #1389
-
I am trying to use the Knowledge Gradient acquisition function to tune and run a Navier–Stokes solver. My cost function (simulation wall-time) has an observation error that is typically around 1%. My goal is to minimise the cost function until I am fairly certain that the theoretical best candidate in the domain and the best candidate found after X iterations are within 1% relative difference of each other. Whenever I use Bayesian Optimisation to generate candidates, I run into the following uncertainties and questions:
-
By this do you mean the random seed of the optimization algorithm (rather than the random seed of your solver and/or initial conditions)? One reassuring thing is that the costs are similar - it may well be that there are multiple local minima with costs very close to that of the global minimum. If that's the case, then the fact that the optimizers differ between seeds isn't really a concern. As for whether a particular model is "better" than another - that's really something you will have to assess yourself; it's not something the algorithm can help with (all it sees is the cost function). Are there other metrics beyond the simulation wall time that are of interest?
Same point as above - if the response surface of the black box function you're optimizing here is multimodal with local minima of very similar performance, then the fact that the candidates themselves aren't replicated between rounds isn't necessarily an issue.
That's a great question. Since we have an underlying probabilistic surrogate model, we can reason about these things. Because everything is probabilistic, the model will never be certain that such a candidate cannot be found. But you could work with a probability threshold: say you want this to hold with probability at least some chosen level, and stop once the model assigns at least that much posterior probability to the incumbent being within your tolerance.
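A minimal sketch of such a probabilistic stopping check, using a scikit-learn GP as a stand-in for the actual surrogate (the toy cost function, the grid over the domain, the 1% noise level, and the 95% threshold are all illustrative assumptions, not part of the original thread): draw functions from the GP posterior, and estimate the probability that the incumbent is within 1% of the posterior global minimum.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy 1-D stand-in for the expensive solver wall-time (assumption).
def cost(x):
    return np.sin(3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(12, 1))          # evaluated candidates so far
y = cost(X).ravel() + rng.normal(0, 0.01, 12) # observations with ~1% noise

# Surrogate model fit to the observations collected so far.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4,
                              normalize_y=True)
gp.fit(X, y)

# Dense grid approximating the search domain (assumption: 1-D, [-2, 2]).
grid = np.linspace(-2, 2, 400).reshape(-1, 1)
samples = gp.sample_y(grid, n_samples=500, random_state=1)  # (400, 500)

incumbent = y.min()                   # best cost observed so far
tol = 0.01 * abs(incumbent)           # 1% relative tolerance
sample_minima = samples.min(axis=0)   # global minimum of each posterior draw

# Posterior probability that the incumbent is within tolerance of the
# true global minimum, estimated by Monte Carlo over posterior draws.
p_converged = np.mean(incumbent - sample_minima <= tol)
print(f"P(incumbent within 1% of global min): {p_converged:.2f}")

# Example stopping rule: terminate once this probability exceeds 0.95.
stop = p_converged >= 0.95
```

In a real BoTorch loop you would replace the scikit-learn GP with your fitted `SingleTaskGP` posterior and sample it the same way; the grid would become a Sobol or random sample of the design space in higher dimensions.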