Multi-fidelity BO tutorial doesn't work? #2523
---
Yes, this is correct. MFKG avoids evaluating the high fidelities due to their high cost, so your analysis of how well it does needs to be based on the points it would recommend at each step, not on what it evaluates as part of the optimization. Take a look at the ["Make a final recommendation"](https://botorch.org/tutorials/discrete_multi_fidelity_bo#Make-a-final-recommendation) section of the tutorial. This is often referred to as considering "inference regret" rather than "simple regret": the former is based on what the model believes is best, while the latter is based only on the points that have been evaluated so far.
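To make the distinction concrete, here is a minimal, library-free sketch contrasting the two regret notions. All names and numbers here are illustrative, not part of the BoTorch API:

```python
# Toy illustration of simple vs. inference regret (maximization setting).

def objective(x):
    """True high-fidelity objective; illustrative only."""
    return -(x - 0.7) ** 2  # optimum at x = 0.7 with value 0.0

# Points the optimizer actually evaluated at the highest fidelity.
evaluated_xs = [0.1, 0.3, 0.5]

# Simple regret: gap between the optimum and the best *evaluated* point.
best_evaluated = max(objective(x) for x in evaluated_xs)
simple_regret = objective(0.7) - best_evaluated

# Inference regret: gap between the optimum and the point the *model*
# recommends (e.g., the maximizer of the posterior mean). Here we pretend
# the surrogate has correctly inferred the optimum is near x = 0.68.
recommended_x = 0.68
inference_regret = objective(0.7) - objective(recommended_x)

print(f"simple regret:    {simple_regret:.4f}")
print(f"inference regret: {inference_regret:.4f}")
```

The model's recommendation can be much better than any point actually evaluated, which is exactly why the final recommendation step matters for judging MFKG.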
---
I am going off of the multi-fidelity BO tutorial, which compares multi-fidelity knowledge gradient (KG) with traditional expected improvement (EI) on just the highest fidelity. At the end, I plot the optimization curves as a function of cost (for multi-fidelity BO, tracking only the highest-fidelity evaluations for the best value so far), but single-fidelity EI seems to outperform multi-fidelity KG. Is this not the right way to track this?
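For reference, the best-so-far-versus-cost bookkeeping described above can be sketched as follows. The history entries are made up for illustration; in the real comparison they would come from the optimization loop:

```python
# Build a best-so-far curve as a function of cumulative cost, where only
# highest-fidelity evaluations count toward the incumbent best value.

# Each record: (cost_of_evaluation, fidelity, observed_value) -- made-up data.
history = [
    (1.0, 0.5, 0.2),   # cheap low-fidelity evaluation
    (1.0, 0.5, 0.4),
    (5.0, 1.0, 0.6),   # expensive high-fidelity evaluation
    (1.0, 0.5, 0.9),   # good low-fidelity value: does NOT count as "best"
    (5.0, 1.0, 0.7),
]

cum_cost = 0.0
best = float("-inf")
curve = []  # (cumulative cost, best high-fidelity value so far)
for cost, fidelity, value in history:
    cum_cost += cost           # every evaluation counts toward cost...
    if fidelity == 1.0:        # ...but only the highest fidelity counts
        best = max(best, value)  # toward the incumbent best value
    curve.append((cum_cost, best))

print(curve)
```

Note that this bookkeeping is exactly the "simple regret" view: if MFKG's strength shows up mainly in its final recommendation, a curve built this way can understate it.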
Edit:
I just looked at the output of an earlier cell in the notebook, where it makes a final recommendation from the multi-fidelity KG. The recommended point's objective value is better than anything KG found during optimization, and better than what EI was able to find. So somehow the GP "knows" about the best objective but never actually evaluates it? Am I interpreting this correctly?
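This is possible because the surrogate interpolates and extrapolates from nearby observations (including cheap low-fidelity ones), so its predicted optimum need not coincide with any evaluated point. A library-free caricature, with a quadratic fit standing in for the GP posterior mean:

```python
import numpy as np

# Caricature of a surrogate "knowing" the optimum without evaluating it:
# fit a quadratic model (standing in for a GP posterior mean) to three
# observed points and read off the model's predicted maximizer.

xs = np.array([0.0, 0.4, 1.0])   # evaluated inputs; none is the optimum
ys = -(xs - 0.7) ** 2            # noiseless observations of the objective

a, b, c = np.polyfit(xs, ys, deg=2)   # model: a*x^2 + b*x + c
x_star = -b / (2 * a)                 # model's predicted maximizer

best_observed = ys.max()
print(round(x_star, 3))    # close to the true optimum 0.7, never evaluated
print(best_observed)       # strictly worse than the model's recommendation
```

So yes: the model can recommend (and correctly value) a point better than anything in the evaluated set, which is why the tutorial's final-recommendation cell can beat the raw optimization trace.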