
[GENERAL SUPPORT]: Running into user warning exception stating an objective was not 'observed' #2794

Open
allen-yun opened this issue Sep 27, 2024 · 12 comments
Labels: question (Further information is requested)

@allen-yun

allen-yun commented Sep 27, 2024

Question

Hello,
I'm trying to run a MOBO experiment using the Service API and ran into an issue. Everything appears to run properly, but warnings are being thrown, which makes me wonder where the problem is coming from.

The specific message is:

[INFO 09-27 00:05:42] ax.modelbridge.transforms.standardize_y: Outcome x is constant, within tolerance.
[INFO 09-27 00:05:42] ax.modelbridge.transforms.standardize_y: Outcome y is constant, within tolerance.
C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\ax\modelbridge\cross_validation.py:462: UserWarning:

Encountered exception in computing model fit quality: Outcome `x` was not observed.

I've attached my code below for further context. It runs in a Jupyter notebook: the user manually enters an initial reference trial, and the remaining ten trials are completed through a user dialog. The surrogate is a SingleTaskGP and the acquisition function is qNEHVI (not sure whether I've implemented it correctly).

Please provide any relevant code snippet if applicable.

gs = GenerationStrategy(
    steps=[
        # Bayesian optimization step using the custom acquisition function
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,  # No limitation on how many trials should be produced from this step
            model_kwargs={
                "surrogate": Surrogate(SingleTaskGP),
                "botorch_acqf_class": qLogNoisyExpectedHypervolumeImprovement
            },
        ),
    ]
)

ax_client = AxClient(generation_strategy=gs)
ax_client.create_experiment(
    name="lc_optimization",
    parameters=[
        {
            "name": "a",
            "type": "range",
            "bounds": [100, 400],
        },
        {
            "name": "b",
            "type": "range",
            "bounds": [0.3, 500.00],
        },
        {
            "name": "c",
            "type": "range",
            "bounds": [0.0, 10.0],
        },
        {
            "name": "d",
            "type": "range",
            "bounds": [0, 5],
        },
        {
            "name": "e",
            "type": "range",
            "bounds": [0, 5],
        },
        {
            "name": "f",
            "type": "range",
            "bounds": [0.0, 3.0],
        },
        {
            "name": "g",
            "type": "range",
            "bounds": [0.0, 10.0],
        },
    ],
    objectives={
        # `threshold` arguments are optional
        "x": ObjectiveProperties(minimize=True, threshold=1.0),
        "y": ObjectiveProperties(minimize=True, threshold=1.0),
    },
    overwrite_existing_experiment=True,
    is_test=True,
)

initial_trial_parameters = {
    "a": 200,
    "b": 250.0,
    "c": 0.0,
    "d": 1,
    "e": 1,
    "f": 1.5,
    "g": 1.0,
}
initial_trial_results = {"x": (1.5, None), "y": (2.5, None)}

ax_client.attach_trial(initial_trial_parameters)
ax_client.complete_trial(trial_index=0, raw_data=initial_trial_results)

for i in range(10):  # Number of trials
    parameters, trial_index = ax_client.get_next_trial()
    print(f"Trial {i+1}: {parameters}")

    # Pause for user input via dialog
    trial_x = get_user_inputX("Please enter resulting x")
    trial_y = get_user_inputY("Please enter resulting y")

    # Record the user-provided evaluation
    ax_client.complete_trial(trial_index=trial_index, raw_data={"x": (trial_x, None), "y": (trial_y, None)})

Code of Conduct

  • I agree to follow Ax's Code of Conduct
allen-yun added the question (Further information is requested) label on Sep 27, 2024
Cesar-Cardoso self-assigned this on Sep 27, 2024
@Cesar-Cardoso
Contributor

Hello there! Could you also provide your implementations of get_user_inputX and get_user_inputY? I've replaced them with `input()` and I'm able to iterate successfully, without encountering the `Encountered exception in computing model fit quality: Outcome x was not observed.` error, using Ax v0.4.3 and BoTorch v0.12.0.
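
For reference, my stand-in was roughly the following, just swapping the Tk dialogs for the built-in input():

def get_user_inputX(prompt):
    # Read a float from stdin instead of a Tk dialog.
    return float(input(prompt + ": "))

def get_user_inputY(prompt):
    return float(input(prompt + ": "))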

@allen-yun
Author

allen-yun commented Sep 27, 2024

Yes! I attached the two functions below:

import tkinter as tk
from tkinter import simpledialog

def get_user_inputX(prompt):
    root = tk.Tk()
    root.withdraw()  # hide the empty root window; only the dialog is shown
    user_input = simpledialog.askfloat(title="Resulting x", prompt=prompt)
    root.destroy()
    return user_input

def get_user_inputY(prompt):
    root = tk.Tk()
    root.withdraw()  # hide the empty root window; only the dialog is shown
    user_input = simpledialog.askfloat(title="Resulting y", prompt=prompt)
    root.destroy()
    return user_input

@Cesar-Cardoso
Contributor

Thanks! Your x and y values would be floats here, so that wouldn't be the issue. Can you double-check that this issue still reproduces for you with the latest versions of Ax & BoTorch, using the code snippet you provided above?

If it does, can you also provide the full stack trace of the error?
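
(If it helps, the installed versions can be checked with the standard library; note that Ax is published on PyPI as "ax-platform", so that's the name to query:)

from importlib.metadata import version

print("Ax:", version("ax-platform"))
print("BoTorch:", version("botorch"))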

@allen-yun
Author

allen-yun commented Sep 30, 2024

I believe I have the most up-to-date versions and the issue is still happening. I can still run my trials fine, but I don't know whether this will affect the optimization.

Here is the trace I'm seeing:

[INFO 09-30 10:48:34] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 09-30 10:48:34] ax.service.utils.instantiation: Inferred value type of ParameterType.INT for parameter a. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-30 10:48:34] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter b. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-30 10:48:34] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter c. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-30 10:48:34] ax.service.utils.instantiation: Inferred value type of ParameterType.INT for parameter d. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-30 10:48:34] ax.service.utils.instantiation: Inferred value type of ParameterType.INT for parameter e. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-30 10:48:34] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter f. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-30 10:48:34] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter g. If that is not the expected value type, you can explicitly specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 09-30 10:48:34] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='a', parameter_type=INT, range=[100, 400]), RangeParameter(name='b', parameter_type=FLOAT, range=[0.3, 500.0]), RangeParameter(name='c', parameter_type=FLOAT, range=[0.0, 10.0]), RangeParameter(name='d', parameter_type=INT, range=[0, 5]), RangeParameter(name='e', parameter_type=INT, range=[0, 5]), RangeParameter(name='f', parameter_type=FLOAT, range=[0.0, 3.0]), RangeParameter(name='g', parameter_type=FLOAT, range=[0.0, 10.0]), FixedParameter(name='h', parameter_type=FLOAT, value=0.0), FixedParameter(name='i', parameter_type=FLOAT, value=0.0), FixedParameter(name='j', parameter_type=FLOAT, value=0.0)], parameter_constraints=[]).
[INFO 09-30 10:48:34] ax.core.experiment: The is_test flag has been set to True. This flag is meant purely for development and integration testing purposes. If you are running a live experiment, please set this flag to False
[INFO 09-30 10:48:34] ax.core.experiment: Attached custom parameterizations [{'a': 200, 'b': 250.0, 'c': 0.0, 'd': 1, 'e': 1, 'f': 1.5, 'g': 1.0, 'h': 0.0, 'i': 0.0, 'j': 0.0}] as trial 0.
[INFO 09-30 10:48:34] ax.service.ax_client: Completed trial 0 with data: {'x': (1.5, None), 'y': (2.5, None)}.
[INFO 09-30 10:48:34] ax.modelbridge.transforms.standardize_y: Outcome x is constant, within tolerance.
[INFO 09-30 10:48:34] ax.modelbridge.transforms.standardize_y: Outcome y is constant, within tolerance.
C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\ax\modelbridge\cross_validation.py:462: UserWarning: Encountered exception in computing model fit quality: Outcome `x` was not observed.
  warn("Encountered exception in computing model fit quality: " + str(e))
[INFO 09-30 10:48:35] ax.service.ax_client: Generated new trial 1 with parameters {'a': 100, 'b': 0.3, 'c': 10.0, 'd': 5, 'e': 5, 'f': 0.0, 'g': 10.0, 'h': 0.0, 'i': 0.0, 'j': 0.0} using model BoTorch.
Trial 1: {'a': 100, 'b': 0.3, 'c': 10.0, 'd': 5, 'e': 5, 'f': 0.0, 'g': 10.0, 'h': 0.0, 'i': 0.0, 'j': 0.0}

@allen-yun
Author

This is what I'm importing as well:

import tkinter as tk
from tkinter import simpledialog

from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

from botorch.models.gp_regression import SingleTaskGP
from botorch.acquisition.multi_objective.logei import qLogNoisyExpectedHypervolumeImprovement

@Cesar-Cardoso
Contributor

Thank you for the logs! The warning is being logged here: https://github.com/facebook/Ax/blob/main/ax/modelbridge/cross_validation.py#L433-L469. The get_fit_and_std_quality_and_generalization_dict() method is used for analytics purposes only, so an exception in there shouldn't affect any of the modeling.

That being said, the exception you're seeing is coming from https://github.com/facebook/Ax/blob/main/ax/modelbridge/torch.py#L348. I would add a breakpoint on that exception and inspect the value of Xs to figure out what the discrepancy is. When I do it on my build, I see the expected 'x' and 'y' keys in the dictionary.
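
If the warning is just noise for you in the meantime, you could also silence that one message with a standard-library filter; this is only a sketch, and it doesn't touch candidate generation at all:

import warnings

# Ignore only the analytics model-fit-quality warning; modeling is unaffected.
warnings.filterwarnings(
    "ignore",
    message="Encountered exception in computing model fit quality",
    category=UserWarning,
)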

@allen-yun
Author

Thanks for getting back to me @Cesar-Cardoso!

How would I inspect the value of Xs? In the above example I only fed it one trial, with an x value of 1.5 and a y value of 2.5, so I'm not sure how this would throw an exception. Also, I'm considering getting rid of the thresholds for both objectives. Would this affect the model if I force it to only run 100 trials per experiment? If so, at the end of every experiment would it just come down to evaluating ax_client.get_pareto_optimal_parameters() and then feeding the trials back into the script if I wanted to run another set of 100 trials?

@Cesar-Cardoso
Contributor

The simplest way is probably to just add a print(Xs)

            if outcome not in Xs:
                print(Xs)
                raise ValueError(f"Outcome `{outcome}` was not observed.")

in https://github.com/facebook/Ax/blob/main/ax/modelbridge/torch.py#L348-L349, see what you get when the exception is logged, and go from there. But again, this is all happening in our analytics logic, which is only there to help you assess model performance and debug.

I'm not sure I understand your second question well. Yes, eliminating the thresholds will affect modeling, as more of the search space will be considered, including points on the Pareto front. And yes, attaching the parameters and outcomes from the first 100 trials and then running 100 more should yield the same results as running them all at once.
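
Roughly, the end-of-run bookkeeping could look like the sketch below, where past_trials is a hypothetical list of (parameters, raw_data) pairs you saved from the first run:

# Inspect the Pareto-optimal arms at the end of a run.
pareto = ax_client.get_pareto_optimal_parameters()
for trial_index, (parameters, (means, _covariances)) in pareto.items():
    print(f"Trial {trial_index}: {parameters} -> {means}")

# Re-seed a fresh experiment with previously observed trials.
for parameters, raw_data in past_trials:
    _, trial_index = ax_client.attach_trial(parameters)
    ax_client.complete_trial(trial_index=trial_index, raw_data=raw_data)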

@allen-yun
Author

allen-yun commented Oct 3, 2024

Okay, trying to complete a trial gave these additional messages. I'll try adding the print line as you mentioned, @Cesar-Cardoso, and see what might be going on:

C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\ax\modelbridge\cross_validation.py:462: UserWarning: Encountered exception in computing model fit quality: Outcome `X` was not observed.
  warn("Encountered exception in computing model fit quality: " + str(e))
[INFO 10-03 10:04:14] ax.service.ax_client: Generated new trial 1 with parameters {'a': 160, 'b': 295, 'c': 6.70976, 'd': 1.5, 'e': 0, 'f': 2.594893, 'g': 10.0, 'h': 0.0, 'i': 0.0, 'j': 0.0} using model BoTorch.
[INFO 10-03 10:04:20] ax.service.ax_client: Completed trial 1 with data: {'X': (2.5, None), 'Y': (1.2, None)}.
[INFO 10-03 10:04:20] ax.modelbridge.transforms.standardize_y: Outcome X is constant, within tolerance.
C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\botorch\models\utils\assorted.py:260: InputDataWarning: Data is not standardized (std = tensor([0., 1.], dtype=torch.float64), mean = tensor([ 0.0000e+00, -1.1102e-16], dtype=torch.float64)). Please consider scaling the input to zero mean and unit variance.
  check_standardization(Y=train_Y, raise_on_fail=raise_on_fail)
C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\ax\modelbridge\cross_validation.py:555: RuntimeWarning: divide by zero encountered in scalar divide
  inv_model_std_quality = max_std if max_std > 1 / min_std else min_std
C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\ax\modelbridge\cross_validation.py:558: RuntimeWarning: divide by zero encountered in scalar divide
  return 1 / inv_model_std_quality
[INFO 10-03 10:04:24] ax.service.ax_client: Generated new trial 2 with parameters {'a': 100, 'b': 300, 'c': 10.0, 'd': 3.5, 'e': 0, 'f': 3.0, 'g': 10.0, 'h': 0.0, 'i': 0.0, 'j': 0.0} using model BoTorch.
Trial 2: {'a': 100, 'b': 300, 'c': 10.0, 'd': 3.5, 'e': 0, 'f': 3.0, 'g': 10.0, 'h': 0.0, 'i': 0.0, 'j': 0.0}

@allen-yun
Author

Update:
I believe I'm starting with an empty dictionary here, although I'm not sure why:
defaultdict(<class 'list'>, {})

@allen-yun
Author

So all of the exception/warning messages go away if I start with two initial trials instead of one.
Is this intended/needed when using "surrogate": Surrogate(SingleTaskGP) with "botorch_acqf_class": qLogNoisyExpectedHypervolumeImprovement?

I'm wondering whether the optimization will continue to work if I only use one initial trial.
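
For anyone else hitting this, the workaround just amounts to attaching a second completed trial before the optimization loop; the values below are made up for illustration:

# Hypothetical second hand-picked starting point, same format as the first.
second_trial_parameters = {"a": 300, "b": 100.0, "c": 5.0, "d": 2, "e": 3, "f": 0.5, "g": 4.0}
second_trial_results = {"x": (2.0, None), "y": (1.0, None)}

_, trial_index = ax_client.attach_trial(second_trial_parameters)
ax_client.complete_trial(trial_index=trial_index, raw_data=second_trial_results)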

@allen-yun
Author

allen-yun commented Oct 4, 2024

Also, is there a smarter way to limit my parameter search space to increments of 0.5, or to just one decimal place?

I've considered changing to choice parameters and doing something like np.arange(1, 10.5, 0.5).tolist() (see the sketch below), but I was wondering whether a different model would be more effective if I made all my parameters like this.
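
For example, parameter c on a 0.5 grid might look like this (a sketch of the parameter dict, not something I've verified helps the model):

import numpy as np

# Hypothetical: expose `c` on a 0.5-spaced grid via a choice parameter.
c_param = {
    "name": "c",
    "type": "choice",
    "values": np.arange(0.0, 10.5, 0.5).tolist(),
    "is_ordered": True,  # the grid has a natural ordering
}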

To clarify my question:
Will qNEHVI work well with entirely choice parameters, or would Thompson sampling be a better route? (I think the current version of Ax doesn't support multi-objective Thompson sampling?) I'm also curious what the problem looks like if each parameter can only have one decimal place, etc.

Thank you!
