Discrete Multi-Objective BO with an Outcome Constraint #2777
Replies: 1 comment 2 replies
-
We do have https://github.com/pytorch/botorch/blob/bc4b0c6cee13b25e4a8fb2283d29ea69be1ae803/botorch/optim/optimize.py#L1393
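Assuming that link points at optimize_acqf_discrete_local_search, a rough sketch of applying it to this problem (acqf is a placeholder for a fitted acquisition function, and the sum constraint uses the optimizer's (indices, coefficients, rhs) convention, i.e. sum_i coefficients[i] * X[indices[i]] >= rhs):

```python
import torch
from botorch.optim.optimize import optimize_acqf_discrete_local_search

max_val = 8.0  # placeholder budget; one of {6, 8, 10} in the question

# One tensor of allowed values per input dimension: {0, 0.5, ..., max_val}.
discrete_choices = [torch.arange(0.0, max_val + 0.25, 0.5) for _ in range(7)]

# Encode sum(x) <= max_val as -sum(x) >= -max_val.
sum_constraint = (torch.arange(7), -torch.ones(7), -max_val)

candidate, acq_value = optimize_acqf_discrete_local_search(
    acq_function=acqf,
    discrete_choices=discrete_choices,
    q=1,
    inequality_constraints=[sum_constraint],
)
```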
Hmm, not sure why this isn't working properly. One hypothesis: since the sum of the inputs is both an objective and a constraint, the hypervolume computation will push very hard to increase all features. If the scale of that is much larger than the penalty term imposed by the constraint feasibility weighting (which is an approximation), then it seems plausible that increasing the sum increases the weighted objective more than the constraint feasibility estimate penalizes it. You could try playing with the
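For intuition on the approximation mentioned above: the feasibility weighting replaces the hard indicator 1[c(x) <= 0] with a smooth sigmoid whose sharpness is set by a temperature parameter. A toy illustration (not BoTorch internals verbatim; eta and the values here are made up):

```python
import torch

# Negative constraint values mean feasible. With a small temperature the
# weight is ~1 when feasible and ~0 when infeasible, but it is still a
# multiplicative penalty: if the raw hypervolume gain from pushing the
# sum up dwarfs it, infeasible candidates can still score well.
eta = 1e-3
constraint_vals = torch.tensor([-1.0, 0.0, 1.0])
feas_weight = torch.sigmoid(-constraint_vals / eta)
print(feas_weight)  # ~tensor([1.0, 0.5, 0.0])
```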
No, if you have an analytic function for the constraint, you shouldn't model it with a GP. One way would be to just use a

One general comment: if you have a discrete space with relatively few values for each parameter, the optimize-then-round approach can end up working quite poorly. So if you can solve the actual discrete optimization problem directly, I would recommend that.
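The cut-off suggestion above presumably refers to a deterministic model for the known outcome. A minimal sketch of that idea, assuming BoTorch's GenericDeterministicModel combined with a GP via ModelList (train_X and train_Y are placeholders):

```python
from botorch.models import SingleTaskGP
from botorch.models.deterministic import GenericDeterministicModel
from botorch.models.model import ModelList

# GP only for the black-box objective; the known sum is represented
# exactly rather than approximated by a second GP.
gp = SingleTaskGP(train_X, train_Y)  # placeholders: n x 7 inputs, n x 1 outputs
sum_model = GenericDeterministicModel(lambda X: X.sum(dim=-1, keepdim=True))
model = ModelList(gp, sum_model)  # output 0: black box, output 1: sum
```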
-
Hello, I am using BoTorch to perform multi-objective Bayesian optimization for the following problem. There are seven input features, constrained to values in [0, max] in increments of 0.5, where max is typically in the range [6, 10]. There are two outputs, or objectives. The first is a black-box function modeled with a GP that takes the seven input features as input. The second is simply the sum of the seven input features. We are trying to maximize both objectives. Additionally, there is an outcome constraint on the second (summation) objective: it must be less than the max value. I have a couple of questions:

1.) Context: With the discrete increments for the candidates, I found optimize_acqf_discrete() to be very computationally expensive. For reference, for max = [6, 8, 10] there are [50k, 245k, 888k] feasible choices. This approach was attractive to me because the outcome constraint is inherently obeyed, since I have already restricted the set of possible choices. Within optimize_acqf_discrete() I was able to increase max_batch_size up to 2048 before hitting a CUDA OOM error.

Question: Do you have any recommendations for efficiently running MOBO over a (large) discrete set of choices?
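Those feasible-set sizes line up with a stars-and-bars count; a quick sanity check:

```python
from math import comb

# Tuples in {0, 0.5, ..., max} with sum <= max correspond, after doubling,
# to nonnegative integer 7-tuples summing to <= 2*max, of which there are
# C(2*max + 7, 7) by stars and bars.
for max_val in (6, 8, 10):
    print(max_val, comb(2 * max_val + 7, 7))  # 50388, 245157, 888030
```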
2.) Context: Instead of using the discrete approach, I tried to operate in a continuous space and then round the chosen candidates to 0.5 increments. Additionally, I need to impose the outcome constraint. I did this within the qLogNoisyExpectedHypervolumeImprovement() acquisition function definition with code along the following lines:
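(The exact original snippet is not shown; a sketch of one way such a constraint is passed, where model, ref_point, and train_X are placeholders, output index 1 is assumed to be the sum, and constraint callables are feasible when their value is <= 0:)

```python
from botorch.acquisition.multi_objective.logei import (
    qLogNoisyExpectedHypervolumeImprovement,
)

max_val = 8.0  # placeholder budget

acqf = qLogNoisyExpectedHypervolumeImprovement(
    model=model,          # placeholder: two-output model
    ref_point=ref_point,  # placeholder reference point
    X_baseline=train_X,   # placeholder observed inputs
    # Feasible iff the callable's output <= 0, i.e. sum - max_val <= 0.
    constraints=[lambda Z: Z[..., 1] - max_val],
)
```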
However, based on the candidates chosen at each iteration, this constraint is not being obeyed. Instead, the model chooses candidates with each of the seven features close to the maximum, so the sum objective ends up around 7*max.

Question: Why is the constraint not being strictly adhered to? Is this the correct way to impose the outcome constraint for my problem? Should I even be modeling this second summation objective with a GP, given that it isn't really a black-box function?
Let me know if you need additional clarifications to understand my problem or questions. Thank you for all of your work developing BoTorch and supporting this community :-)