
How can we ensure the outcome constraint is not violated? #809

Closed · yuanquan010 opened this issue Feb 15, 2022 · 10 comments
Labels: question (Further information is requested)

yuanquan010 commented Feb 15, 2022

Hello, thanks for the great project.
I am trying to use the Service API, and when I call ax_client.create_experiment() I set some outcome_constraints like this:

[screenshot]

However, in the results of exp_to_df we can see that the outcome_constraints are often violated.

[screenshot]

How can I solve this problem?
In the ax_client I am using, I see that outcome_constraints is only a list of constraints,

[screenshot]

but it seems that there are some ways to ensure the outcome_constraints are not violated.

[screenshot]
[screenshot]

Can we use the parameter 'relative', or some other mechanism, to ensure the outcome_constraints are not violated in the Service API?

@danielcohenlive added the "question (Further information is requested)" label on Feb 15, 2022

@danielcohenlive commented

@yuanquan010 thanks for the question! The problem may just be around the concept of what outcome constraints do. From a high level, when you run a trial on an Ax experiment, it has arms with a dict of parameter values (inputs) which map to metric values (results). The client then iterates on the previous findings to arrive at better and better parameter values. An outcome constraint doesn't set what metric values running a given arm should yield, but sets what metric values are considered good for the next time you iterate. When you access exp_to_df you are just seeing previous results, both good and bad.

I don't necessarily think you want relative=True. That would make things relative to a status quo arm, so out_const1 would only be good if it was <= status_quo.out_const1 + 1.1939 (pseudo-code). So I don't necessarily see anything wrong with the way the client is working. 2_0 and 6_0 look like the only arms where the experiment managed to satisfy the constraints.

But if you are saying you believe the ax client is iterating on arms that violate constraints as if they were good, there might be a problem. It is hard to say whether the client is malfunctioning without knowing more about the rest of the optimization config and what kind of experiment you're running (how much uncertainty there is), whether the parameters you're using significantly affect the metrics, etc. If you do think it's predicting incorrectly, definitely provide us with more details about the experiment (redacting any sensitive data) so we can help you.
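As a concrete illustration of the point that exp_to_df only reports past results, here is a minimal sketch, assuming the Service API experiment from this thread; the metric columns are assumptions based on the names used later in the discussion:

from ax.service.utils.report_utils import exp_to_df

# exp_to_df lists every arm that has been evaluated, feasible or not; outcome
# constraints only influence which candidates Ax proposes next and which point
# is reported as best, not which rows appear here.
df = exp_to_df(ax_client.experiment)
print(df[["arm_name", "ETC", "out_cons1"]])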

@danielcohenlive self-assigned this on Feb 15, 2022
@EugenHotaj (Contributor) commented

@yuanquan010 to add a bit more to what @danielcohenlive said above, Ax (and BoTorch) probabilistically model the outcome constraints so it's not guaranteed that the constraints won't be violated, especially early in the optimization when we don't know much about the true function. (In fact, it's impossible to guarantee this in the general case since this would require knowing how the true function behaves).

However, methods like get_best_trial should take into account outcome constraints and should not return infeasible points.
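A minimal sketch of that, using the Service API's get_best_parameters (assuming the ax_client from this thread has already completed some trials):

# Ask the client for its best point after some trials have completed.
# Like get_best_trial, this is expected to take outcome constraints into
# account and avoid returning an infeasible arm.
best_parameters, values = ax_client.get_best_parameters()
print(best_parameters)
if values is not None:
    means, covariances = values
    print(means)  # predicted metric means at the recommended point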

@yuanquan010 (Author) commented Feb 16, 2022

@danielcohenlive Thanks a lot for your reply.
I am trying to enforce four parameter constraints of the form a1 * b1 <= C, but this type of constraint is not supported, so I use numpy.log() to convert it into a sum. I don't know how to express that with parameter_constraints, but based on the example in the tutorial, outcome_constraints can handle this kind of converted constraint. Other than converting the parameter a1 into np.log(a1) directly, could you suggest other ways to enforce the constraint a1 * b1 <= C?
The following is my code:

import os
import numpy as np
from ax.service.ax_client import AxClient

ax_client = AxClient()

# Four ellipsoids, each described by semi-axes a_i, b_i and angles theta1_i, theta2_i, theta3_i.
parameters = []
for i in range(1, 5):
    parameters += [
        {"name": f"a{i}", "type": "range", "bounds": [1.79, 1.85], "value_type": "float"},
        {"name": f"b{i}", "type": "range", "bounds": [1.79, 1.85], "value_type": "float"},
        {"name": f"theta1{i}", "type": "range", "bounds": [0.0, np.pi]},
        {"name": f"theta2{i}", "type": "range", "bounds": [0.0, np.pi]},
        {"name": f"theta3{i}", "type": "range", "bounds": [0.0, np.pi]},
    ]

ax_client.create_experiment(
    name="ETC_optimization_3D",
    parameters=parameters,
    objective_name="ETC",
    minimize=True,  # Optional, defaults to False.
    # parameter_constraints=["x1 + x2 <= 2.0"],  # Optional.
    outcome_constraints=[
        "out_cons1 <= 1.1939",
        "out_cons2 <= 1.1939",
        "out_cons3 <= 1.1939",
        "out_cons4 <= 1.1939",
    ],  # Optional.
)
def evaluate(parameters):
    names = []
    for i in range(1, 5):
        names += [f"a{i}", f"b{i}", f"theta1{i}", f"theta2{i}", f"theta3{i}"]
    x = np.array([parameters.get(name) for name in names])
    np.savetxt('parameter.csv', x, delimiter=',')
    # Run the Abaqus simulation (Ellipsoid.py), which produces ETC.csv.
    run_filename = 'Ellipsoid.py'
    os.system('abaqus cae noGUI=%s' % run_filename)
    xx = np.loadtxt('ETC.csv')
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {
        "ETC": xx[24],
        "out_cons1": np.log(x[0]) + np.log(x[1]),    # log(a1) + log(b1)
        "out_cons2": np.log(x[5]) + np.log(x[6]),    # log(a2) + log(b2)
        "out_cons3": np.log(x[10]) + np.log(x[11]),  # log(a3) + log(b3)
        "out_cons4": np.log(x[15]) + np.log(x[16]),  # log(a4) + log(b4)
    }

@yuanquan010 (Author) commented

> Ax (and BoTorch) probabilistically model the outcome constraints so it's not guaranteed that the constraints won't be violated [...] However, methods like get_best_trial should take into account outcome constraints and should not return infeasible points.

Thank you, I got it

@danielcohenlive commented

You're right, this form of parameter constraint ("p1 * p2 <= C") is not yet supported. It looks like it could be supported in the future though, so thanks for pointing this out! For now, what I might do is modify the evaluation loop part of your code to look like this:

def satisfies_product_param_constraint(params):
    return params["a1"] * params["b1"] <= 1.45  # "a1 * b1 <= C"

i = 0
while i < 25:
    parameters, trial_index = ax_client.get_next_trial()
    if satisfies_product_param_constraint(parameters):
        ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
        i += 1  # only count trials that were actually completed
    else:
        ax_client.abandon_trial(trial_index, reason="Violates product constraint")  # `reason` is optional

You can then keep the rest of what you had in your comment. I realize this isn't the cleanest solution, but it should work.

@yuanquan010 (Author) commented

Hey, thank you for your advice.
I tried it, but the point is that I need to implement four constraints of that form.
The following is my code. My question is: will this approach affect the Sobol sampling? Since we hope for uniform Sobol sampling, will rejecting trials this way lead to an unrepresentative sampling of the space?

def evaluate(parameters):
    names = []
    for i in range(1, 5):
        names += [f"a{i}", f"b{i}", f"theta1{i}", f"theta2{i}", f"theta3{i}"]
    x = np.array([parameters.get(name) for name in names])
    np.savetxt('parameter.csv', x, delimiter=',')
    # Run the Abaqus simulation (Ellipsoid.py), which produces ETC.csv.
    run_filename = 'Ellipsoid.py'
    os.system('abaqus cae noGUI=%s' % run_filename)
    xx = np.loadtxt('ETC.csv')
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"ETC": xx[24]}

def satisfies_product_param_constraint(params):
    return (
        params["a1"] * params["b1"] <= 3.3
        and params["a2"] * params["b2"] <= 3.3
        and params["a3"] * params["b3"] <= 3.3
        and params["a4"] * params["b4"] <= 3.3
    )

for i in range(1000):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to an external system.
    if satisfies_product_param_constraint(parameters):
        ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
    else:
        ax_client.abandon_trial(trial_index, reason="Violates product constraint")  # `reason` is optional

[screenshot]

Also, only about 30 out of 1000 loops were accepted, so the efficiency does not look very high.

@EugenHotaj (Contributor) commented

@yuanquan010 To take a step back, do you mind explaining a bit more about what you're trying to achieve? Maybe there is a way to reformulate the problem (search space, objective, etc) in a way which is natively supported by Ax.

> my question is: will this approach affect the Sobol sampling...

I believe rejecting trials should not affect the uniformity of Sobol sampling. Could you expand a bit more on what your concern is here? Are you only using Sobol during the optimization?

> Also, only about 30 out of 1000 loops were accepted...

Yea, depending on how big your infeasible region is, the rejection rate can be quite large. Another potential approach here is to return some large objective value in the infeasible region (if you're minimizing the function), instead of outright abandoning trials.

Finally, if there is no better way to reformulate your problem, we recently added support for non-linear constraints in BoTorch and are working on exposing them in Ax (#794). Note that BoTorch has a lot more overhead than Ax when setting up the optimization.
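A minimal sketch of the "return a large objective value" approach mentioned above, assuming the experiment from the previous comment where evaluate returns only the ETC metric and ETC is minimized (the penalty value is an arbitrary placeholder):

PENALTY = 1e6  # arbitrary large value; should be worse than any realistic ETC value

for _ in range(1000):
    parameters, trial_index = ax_client.get_next_trial()
    if satisfies_product_param_constraint(parameters):
        raw_data = evaluate(parameters)
    else:
        # Feed back a heavily penalized objective instead of abandoning the
        # trial, so the model learns to avoid the infeasible region.
        raw_data = {"ETC": PENALTY}
    ax_client.complete_trial(trial_index=trial_index, raw_data=raw_data)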

@yuanquan010 (Author) commented Feb 18, 2022

> To take a step back, do you mind explaining a bit more about what you're trying to achieve? Maybe there is a way to reformulate the problem (search space, objective, etc) in a way which is natively supported by Ax.

Hello, thanks a lot for your reply, it is very helpful.
I am trying to simulate some properties of a solid containing four ellipsoids, and I hope to improve those properties by optimizing the parameters of the ellipsoids. Here a, b, c are the geometrical parameters of one ellipsoid.
The constraint is that the volume of each ellipsoid is constant, which means a * b * c = constant. In addition, a, b, c should each be smaller than 1.85. Currently, I just sample a and b, then c can be calculated from them, and based on the range of c I set rough ranges for a and b.

[screenshot]
[screenshot]

Can we find a better way to reformulate this problem?
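A minimal sketch of the "c can be calculated" step described above (the product constant is a hypothetical placeholder; the relation follows directly from a * b * c = constant):

PRODUCT_CONSTANT = 5.0  # hypothetical placeholder for the fixed a * b * c value

def c_from_ab(a, b, constant=PRODUCT_CONSTANT):
    # Keep the ellipsoid volume fixed: a * b * c = constant  =>  c = constant / (a * b)
    return constant / (a * b)

# Example: c for a = b = 1.8 under the fixed-volume constraint.
print(c_from_ab(1.8, 1.8))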

@EugenHotaj (Contributor) commented Feb 18, 2022

@yuanquan010 Thanks for the extra info, this is very helpful. Seems like nonlinear constraints would be the natural way to express what you want, but we don't support that in Ax yet.

Your suggestion of working in log space actually makes a lot of sense. The easiest thing to do would probably be to convert/unconvert your parameters outside of Ax. Don't forget to also convert your bounds! E.g.:

ax_client.create_experiment(
    ...
    parameters=[
        {
            "name": "a1",
            "type": "range",
            "bounds": [np.log(...), np.log(...)],
            "log_scale": False,  # Set to False because the parameter is already in log space.
        },
        ...
    ],
    parameter_constraints=["a1 + b1 + c1 <= constant", ...],
    outcome_constraints=[...],
    ...
)

def evaluate(params):
    a1 = np.exp(params.get("a1"))  # Convert back from log space before computing the objective.
    ...
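Putting this together with the bounds used earlier in the thread (1.79 to 1.85, and a product bound of 3.3), a minimal sketch of the log-space reformulation for a single pair of parameters might look like the following; the experiment name is made up and the objective is a stand-in for the real simulation:

import numpy as np
from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="log_space_sketch",  # hypothetical name
    parameters=[
        # a1 and b1 are searched in log space, so the product bound becomes linear.
        {"name": "a1", "type": "range", "bounds": [float(np.log(1.79)), float(np.log(1.85))]},
        {"name": "b1", "type": "range", "bounds": [float(np.log(1.79)), float(np.log(1.85))]},
    ],
    objective_name="ETC",
    minimize=True,
    # a1 * b1 <= 3.3 in the original space  <=>  log(a1) + log(b1) <= log(3.3) ~ 1.1939.
    parameter_constraints=["a1 + b1 <= 1.1939"],
)

def evaluate(params):
    a1 = np.exp(params["a1"])  # convert back to the original space
    b1 = np.exp(params["b1"])
    return {"ETC": a1 * b1}  # placeholder objective standing in for the real simulation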

@yuanquan010 (Author) commented Feb 21, 2022

> Your suggestion of working in log space actually makes a lot of sense. The easiest thing to do would probably be to convert/unconvert your parameters outside of Ax. Don't forget to also convert your bounds!

Okay, thank you for your help.
