
Objective Thresholds are inferred even though they are specified #802

Closed
mcd01 opened this issue Feb 8, 2022 · 7 comments

Comments

mcd01 commented Feb 8, 2022

Dear all,

this is a follow-up to #768, or at least closely related to it. When initializing my multi-objective optimization experiment with some initial data, I observed that the objective thresholds I specify are not considered: Ax still tries to infer them. Find below a minimal example for reproducibility. In this example, the initial data is deliberately configured to violate all objective thresholds and outcome constraints, so that the problem surfaces immediately. However, the thresholds are inferred regardless of whether the initial data is "valid" or not.

import torch
import numpy as np
from ax import Models
from ax.modelbridge.generation_node import GenerationStep
from ax.modelbridge.generation_strategy import GenerationStrategy
from ax.models.torch.botorch_modular.list_surrogate import ListSurrogate
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties
from botorch.acquisition.multi_objective import qNoisyExpectedHypervolumeImprovement
from botorch.models import FixedNoiseGP
from gpytorch.mlls import SumMarginalLogLikelihood

# setup experiment, get initial starting points
generation_strategy: GenerationStrategy = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            enforce_num_trials=True,
            max_parallelism=1,  # sequential evaluation
            model_gen_kwargs={"fit_out_of_design": True},
            model_kwargs={
                "fit_out_of_design": True,
                "surrogate": ListSurrogate(botorch_submodel_class=FixedNoiseGP,  # GP for objectives + constraint
                                           submodel_options={"train_Yvar": torch.tensor(0.1)},  # noise
                                           mll_class=SumMarginalLogLikelihood),
                "botorch_acqf_class": qNoisyExpectedHypervolumeImprovement
                # MC-based batch Noisy Expected Hypervolume Improvement
            }
        ),
    ]
)


def threshold_adapter(value: float):
    # For a minimization objective, the reference point (threshold) should be
    # slightly worse than the target value, i.e. larger; a 10% margin is reasonable.
    return max(value * 1.1, value + np.amax(np.abs(np.random.normal(0, 0.1, 100000))))


ax_client = AxClient(generation_strategy=generation_strategy,
                     verbose_logging=False,
                     enforce_sequential_optimization=True)
ax_client.create_experiment(
    name="experiment",
    parameters=[
        {
            "name": "x0",
            "type": "choice",
            "value_type": "float",
            "values": [2.0, 4.0, 6.0, 8.0]
        },
        {
            "name": "x1",
            "type": "choice",
            "value_type": "float",
            "values": [250.0, 500.0, 750.0, 1000.0]
        },
        {
            "name": "x2",
            "type": "fixed",
            "value_type": "str",
            "value": "TEST"
        }
    ],
    outcome_constraints=["constraint <= 5"],  # candidates must satisfy this constraint
    objectives={
        "objective1": ObjectiveProperties(
            minimize=True,
            threshold=threshold_adapter(5)),
        "objective2": ObjectiveProperties(
            minimize=True,
            threshold=threshold_adapter(10)),
    },
    immutable_search_space_and_opt_config=True,
)

parametrizations = [
    {"x0": 2.0, "x1": 250.0, "x2": "TEST"},
    {"x0": 4.0, "x1": 500.0, "x2": "TEST"},
    {"x0": 8.0, "x1": 1000.0, "x2": "TEST"},
]

results = [
    {"objective1": 10, "objective2": 20, "constraint": 30},
    {"objective1": 15, "objective2": 50, "constraint": 20},
    {"objective1": 20, "objective2": 100, "constraint": 10}
]


def get_noisy_observations(value_dict):
    noise_sd = 0.1
    noise_list = np.random.normal(0, noise_sd, len(value_dict))
    return {k: (v + noise_list[i], noise_sd) for i, (k, v) in enumerate(value_dict.items())}


for params, res in zip(parametrizations, results):
    _, trial_index = ax_client.attach_trial(params)
    ax_client.complete_trial(trial_index=trial_index, raw_data=get_noisy_observations(res))

# trigger the problem: requesting the next trial raises the error below
_, trial_index = ax_client.get_next_trial()
trial = ax_client.experiment.trials[trial_index]
trial.mark_failed()

# ...

Also, find below the stack trace:

[INFO 02-08 17:25:06] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[ChoiceParameter(name='x0', parameter_type=FLOAT, values=[2.0, 4.0, 6.0, 8.0], is_ordered=True, sort_values=True), ChoiceParameter(name='x1', parameter_type=FLOAT, values=[250.0, 500.0, 750.0, 1000.0], is_ordered=True, sort_values=True), FixedParameter(name='x2', parameter_type=STRING, value=TEST)], parameter_constraints=[]).
Traceback (most recent call last):
  File ".../simulation/methods/snippet.py", line 101, in <module>
    _, trial_index = ax_client.get_next_trial()
  File ".../venv/lib/python3.8/site-packages/ax/utils/common/executils.py", line 147, in actual_wrapper
    return func(*args, **kwargs)
  File ".../venv/lib/python3.8/site-packages/ax/service/ax_client.py", line 355, in get_next_trial
    generator_run=self._gen_new_generator_run(), ttl_seconds=ttl_seconds
  File ".../venv/lib/python3.8/site-packages/ax/service/ax_client.py", line 1344, in _gen_new_generator_run
    return not_none(self.generation_strategy).gen(
  File ".../venv/lib/python3.8/site-packages/ax/modelbridge/generation_strategy.py", line 330, in gen
    return self._gen_multiple(
  File ".../venv/lib/python3.8/site-packages/ax/modelbridge/generation_strategy.py", line 471, in _gen_multiple
    generator_run = _gen_from_generation_step(
  File ".../venv/lib/python3.8/site-packages/ax/modelbridge/generation_strategy.py", line 833, in _gen_from_generation_step
    generator_run = generation_step.gen(
  File ".../venv/lib/python3.8/site-packages/ax/modelbridge/generation_node.py", line 111, in gen
    return model_spec.gen(
  File ".../venv/lib/python3.8/site-packages/ax/modelbridge/model_spec.py", line 170, in gen
    return fitted_model.gen(**model_gen_kwargs)
  File ".../venv/lib/python3.8/site-packages/ax/modelbridge/base.py", line 674, in gen
    observation_features, weights, best_obsf, gen_metadata = self._gen(
  File ".../venv/lib/python3.8/site-packages/ax/modelbridge/array.py", line 276, in _gen
    X, w, gen_metadata, candidate_metadata = self._model_gen(
  File ".../venv/lib/python3.8/site-packages/ax/modelbridge/torch.py", line 237, in _model_gen
    X, w, gen_metadata, candidate_metadata = self.model.gen(
  File ".../venv/lib/python3.8/site-packages/ax/models/torch/botorch_modular/model.py", line 272, in gen
    acqf = self._instantiate_acquisition(
  File ".../venv/lib/python3.8/site-packages/ax/models/torch/botorch_modular/model.py", line 450, in _instantiate_acquisition
    return self.acquisition_class(
  File ".../venv/lib/python3.8/site-packages/ax/models/torch/botorch_modular/acquisition.py", line 135, in __init__
    self._objective_thresholds = infer_objective_thresholds(
  File ".../venv/lib/python3.8/site-packages/ax/models/torch/botorch_moo_defaults.py", line 514, in infer_objective_thresholds
    raise AxError("There are no feasible observed points.")
ax.exceptions.core.AxError: There are no feasible observed points.
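To illustrate the final error, here is a plain-Python sketch (my own illustration, not Ax internals). Assuming thresholds roughly matching what threshold_adapter returns (about 5.5 and 11.0; these exact numbers are my assumption), every observation attached above violates the outcome constraint constraint <= 5, so there is no feasible observed point for threshold inference to fall back on:

```python
# Illustrative feasibility check, not Ax code. The threshold values below
# approximate threshold_adapter(5) and threshold_adapter(10) and are assumed
# for illustration only.
results = [
    {"objective1": 10, "objective2": 20, "constraint": 30},
    {"objective1": 15, "objective2": 50, "constraint": 20},
    {"objective1": 20, "objective2": 100, "constraint": 10},
]
thresholds = {"objective1": 5.5, "objective2": 11.0}

def is_feasible(obs):
    # outcome constraint: constraint <= 5; for minimization objectives, an
    # observation must be at or below each threshold to count as feasible
    return obs["constraint"] <= 5 and all(obs[k] <= t for k, t in thresholds.items())

feasible = [obs for obs in results if is_feasible(obs)]
# feasible is empty, matching "There are no feasible observed points."
```

In other words, the repro intentionally forces infeasible data; the reported bug is that threshold inference runs at all when the thresholds were explicitly specified.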

Thanks in advance for your help!

@lena-kashtelyan (Contributor)

Thank you for the repro, @mcd01, we will look into this and get back to you!

@lena-kashtelyan added the "bug" (Something isn't working) label on Feb 8, 2022
@lena-kashtelyan (Contributor)

Hello @mcd01, we could not reproduce this issue: in our internal runs, the objective thresholds are not inferred when running the code snippet you pasted above. Could you check whether you are on the latest version of Ax?
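A quick, library-agnostic way to check which version of a package is installed (generic Python standard library, not an Ax API):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed version string for pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

print(installed_version("ax-platform"))
```

Running `pip show ax-platform` from the command line gives the same information.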

@lena-kashtelyan added the "requires repro or more info" label and removed the "bug" label on Feb 14, 2022

mcd01 commented Feb 15, 2022

Hello @lena-kashtelyan, thank you for the reply. I am using version 0.2.3 of the ax-platform, which is, to my knowledge, the most recent version. Is there anything else I can provide or help you with?

@lena-kashtelyan (Contributor)

Hi @mcd01, we'll publish a new stable release towards the end of this week; if that's okay, we'd ask you to try that, since the issue does not reproduce for us on latest. Stay tuned!

@lena-kashtelyan (Contributor)

We've also just released a new stable version of Ax. @mcd01, you should now be able to re-try your code and see whether the issue still comes up. Please let us know the outcome once you've had a chance to check.


mcd01 commented Mar 30, 2022

Hi @lena-kashtelyan, sorry for the late reply. I am not sure if it is of much help, but I no longer experienced these issues after switching from Models.BOTORCH_MODULAR to Models.MOO_MODULAR.
I might have some spare time next week; I will then also follow your request and check with the most recent code version. Thanks!

@pcanaran (Contributor)

Hi @mcd01, thank you for getting back to us. Since you have not encountered the issue since then, I'll go ahead and close it. If you run into the same problem and need to follow up in the future, please feel free to reopen the issue.
