Safe optimization in the Service API #2563
Hi @Abrikosoff, I am guessing that @Balandat intended the constraint model to be pretrained outside of Ax. If you want to use non-linear constraints with scipy, you could implement a custom Acquisition that has a different optimize function that constructs the right non-linear constraint from the fitted model. An alternative would be to create a new acquisition function that constructs and uses the probabilistic constraint, e.g. EI weighted by the probability that the probabilistic constraint is satisfied. One way to do this would be to make a subclass of (Log)EI that creates the necessary constraint within `construct_inputs`, similar to this. You could then use this acquisition function in a GenerationStrategy. Parts 3b and 5 of this tutorial show how to do this.
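To make the "EI weighted by the probability that the probabilistic constraint is satisfied" idea concrete, here is a minimal, self-contained sketch in plain torch (not the Ax/BoTorch API). It assumes Gaussian posteriors for both the objective and the constraint outcome, an analytic EI for maximization, and a constraint of the form c(x) <= 0; all tensor values below are made-up placeholders.

```python
import torch
from torch.distributions import Normal

def prob_feasible(mean, sigma, upper_bound):
    # P(c(x) <= upper_bound) under a Gaussian posterior on the constraint outcome.
    return Normal(0.0, 1.0).cdf((upper_bound - mean) / sigma)

def expected_improvement(mean, sigma, best_f):
    # Analytic EI for maximization under a Gaussian posterior on the objective.
    u = (mean - best_f) / sigma
    standard_normal = Normal(0.0, 1.0)
    return sigma * (u * standard_normal.cdf(u) + torch.exp(standard_normal.log_prob(u)))

# Feasibility-weighted EI: EI(x) * P(constraint satisfied at x).
obj_mean, obj_sigma = torch.tensor(1.2), torch.tensor(0.3)   # placeholder posterior
con_mean, con_sigma = torch.tensor(-0.5), torch.tensor(0.2)  # placeholder posterior
weighted = expected_improvement(obj_mean, obj_sigma, best_f=torch.tensor(1.0)) * \
    prob_feasible(con_mean, con_sigma, upper_bound=torch.tensor(0.0))
```

In BoTorch itself this weighting is handled for you when you pass outcome constraints to the (q)EI family of acquisition functions, so a subclass would mainly be responsible for building those constraints from the fitted model inside `construct_inputs`.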
Hi Sam, thanks a lot for the reply! Currently what I'm doing is defining nonlinear constraints and passing them to a GenerationStrategy, something like the following:

```python
local_nchoosek_strategy = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.SOBOL,
            num_trials=num_sobol_trials_for_nchoosek,  # https://github.com/facebook/Ax/issues/922
            min_trials_observed=min_trials_observed,
            max_parallelism=max_parallelism,
            model_kwargs=model_kwargs,
        ),
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_gen_kwargs={
                "model_gen_options": {
                    "optimizer_kwargs": {
                        "nonlinear_inequality_constraints": [_ineq_constraint],
                        "batch_initial_conditions": batch_initial_conditions,
                    }
                }
            },
        ),
    ]
)
```

which I can then pass to my AxClient. My initial idea was to pass
Yes, that's right. If you need a trained model from Ax, using data collected during the experiment, I would recommend going with one of the two approaches that I mentioned, since then you would have access to the trained model.
Hi Sam @sdaulton , once again thanks for your reply! I'm preparing to try your alternative suggestion (subclassing LogEI), and I have a few related questions regarding this:
Once again, thanks a lot for taking time out to help!
- You'd want to construct a subclass of qLogEI that takes in, for example, a list of
- Yes, that's right.
- No.
- Yes, but you'd need to specify which outcomes to use for the probabilistic constraints.
Hi Ax Team,
I am trying to implement a Service API version of the safe optimization idea floated by @Balandat here; so far I've come up with a snippet of the form
But here I am stuck, as I'm not sure how to retrieve the current fitted model, since I'm thinking of passing `probs_constraint` as a `nonlinear_inequality_constraint` in a `GenerationStrategy`. Any ideas?