How can we ensure the outcome constraint is not violated? #809
@yuanquan010 thanks for the question! The problem may just be around the concept of what outcome constraints do. At a high level, when you run a trial on an Ax experiment, it has arms with a dict of parameter values (inputs) that map to metric values (results). The client then iterates on the previous findings to arrive at better and better parameter values. An outcome constraint doesn't dictate what metric values running a given arm will yield; it tells the client which metric values are considered acceptable when it chooses arms for the next iteration.

That said, if you believe the client is iterating on arms that violate constraints as if they were good, there might be a problem. It's hard to say whether the client is malfunctioning without knowing more about the rest of the optimization config and what kind of experiment you're running (how much uncertainty there is, whether the parameters you're using significantly affect the metrics, etc.). If you do think it's predicting incorrectly, definitely provide us with more details about the experiment (redacting any sensitive data) so we can help you.
@yuanquan010 to add a bit more to what @danielcohenlive said above, Ax (and BoTorch) model the outcome constraints probabilistically, so it is not guaranteed that the constraints won't be violated, especially early in the optimization when we don't know much about the true function. (In fact, it's impossible to guarantee this in the general case, since that would require knowing how the true function behaves.) However, constraint-aware methods steer candidate generation toward the feasible region as the model learns the constraint surface.
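To make the probabilistic point concrete, here is a minimal sketch (plain Python, not Ax code) of the quantity a model-based method can reason about: the posterior probability that a metric satisfies an upper-bound constraint, assuming a Gaussian posterior with the given mean and standard deviation (the numbers below are illustrative, not from this thread):

```python
import math

def prob_feasible(mean: float, std: float, upper_bound: float) -> float:
    """P(metric <= upper_bound) under a Gaussian posterior N(mean, std^2).

    A constraint-aware optimizer can weight candidates by this probability,
    but early on (large std) it cannot rule out violations entirely.
    """
    z = (upper_bound - mean) / std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# High uncertainty: the model is far from sure the constraint holds.
early = prob_feasible(mean=9.0, std=3.0, upper_bound=10.0)
# Low uncertainty: the same posterior mean now looks clearly feasible.
late = prob_feasible(mean=9.0, std=0.2, upper_bound=10.0)
```

As the posterior tightens around the true function, the feasibility probability sharpens, which is why violations are most common early in the optimization.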
@danielcohenlive Thanks a lot for your reply.

Thank you, I got it.
You're right, this form of parameter constraint (`a1 * b1 <= C`) isn't supported directly, but you can reject candidate arms yourself:

```python
def satisfies_product_param_constraint(params):
    return params["a1"] * params["b1"] < 1.45  # "a1 * b1 <= C"

i = 0
while i < 25:
    parameters, trial_index = ax_client.get_next_trial()
    if satisfies_product_param_constraint(parameters):
        ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
        i += 1  # count only completed trials
    else:
        ax_client.abandon_trial(trial_index, reason="Violates product constraint")  # `reason` is optional
```

You can then leave the rest of what you had in your comment. I realize this isn't the cleanest solution, but it should work.
@yuanquan010 To take a step back, do you mind explaining a bit more about what you're trying to achieve? Maybe there is a way to reformulate the problem (search space, objective, etc.) that is natively supported by Ax.
I believe rejecting trials should not affect the uniformity of Sobol sampling. Could you expand a bit more on what your concern is here? Are you only using Sobol during the optimization?
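As a rough sanity check on the rejection approach, a quick Monte Carlo estimate shows how large the rejection rate can get for the product constraint `a1 * b1 < 1.45` (the `[0, 2]` bounds below are hypothetical, not taken from this thread):

```python
import random

def estimated_rejection_rate(n=100_000, bound=1.45, lo=0.0, hi=2.0, seed=0):
    """Fraction of uniform samples over [lo, hi]^2 that violate a1 * b1 < bound."""
    rng = random.Random(seed)
    rejected = sum(
        1 for _ in range(n) if rng.uniform(lo, hi) * rng.uniform(lo, hi) >= bound
    )
    return rejected / n

rate = estimated_rejection_rate()
```

With these bounds, roughly a quarter of uniform draws would be abandoned; with a tighter bound or wider ranges the waste grows quickly, which is one motivation for reformulating the problem instead.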
Yeah, depending on how big your infeasible region is, the rejection rate can be quite large. Another potential approach here is to return some large objective value in the infeasible region (if you're minimizing the function), instead of outright abandoning trials. Finally, if there is no better way to reformulate your problem, we recently added support for non-linear constraints in BoTorch and are working on exposing them in Ax (#794). Note that BoTorch has a lot more overhead than Ax when setting up the optimization.
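The penalty idea can be sketched like this; `evaluate` below is a stand-in for your real evaluation function, and the penalty magnitude is an assumption you would tune to your objective's scale:

```python
PENALTY = 1e6  # assumed "very bad" value for a minimization objective

def evaluate(params):
    # Stand-in for the real (expensive) evaluation.
    return {"objective": (params["a1"] + params["b1"], 0.0)}  # (mean, SEM)

def evaluate_with_penalty(params):
    """Return a large objective in the infeasible region instead of abandoning."""
    if params["a1"] * params["b1"] >= 1.45:
        return {"objective": (PENALTY, 0.0)}
    return evaluate(params)
```

One caveat: a hard penalty cliff can make the surrogate model's job harder near the constraint boundary, so this trades rejection waste for a less smooth objective.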
Hello, thanks a lot for your reply, it is so helpful.
@yuanquan010 Thanks for the extra info, this is very helpful. It seems like nonlinear constraints would be the natural way to express what you want, but we don't support that in Ax yet. Your suggestion of working in …
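The suggestion above is truncated, but one common reformulation for a product constraint (assuming both parameters are strictly positive) is to search over log-parameters, where `a1 * b1 <= C` becomes the linear constraint `log(a1) + log(b1) <= log(C)`, which linear parameter constraints can express. A quick check that the two formulations agree:

```python
import math

C = 1.45  # the bound from the product constraint above

def feasible_original(a1, b1):
    return a1 * b1 <= C

def feasible_log(la1, lb1):
    # Linear in the log-parameters la1 = log(a1), lb1 = log(b1).
    return la1 + lb1 <= math.log(C)
```

Because `log` is monotonic, the two feasibility checks agree at every positive point, so optimizing over the log-parameters with a linear constraint searches exactly the original feasible region.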
Okay, thank you for your help. |
Hello, thanks for the great project.

I am trying to use the Service API, and when I use `ax_client.create_experiment()`, I set some outcome_constraints like this:

However, in the results of `exp_to_df`, we can find that the outcome_constraints are often violated. How can I solve this problem?

In the `ax_client` being used, I find that `outcome_constraints` is only a list of constraints, but it seems there should be some way to ensure the outcome_constraints are not violated. Can we use the parameter `relative`, or some other means, to ensure the outcome_constraints are not violated in the Service API?