Double inversion for transformed minimization targets #460

Closed
AdrianSosic opened this issue Jan 13, 2025 · Discussed in #459 · 5 comments · Fixed by #462

AdrianSosic commented Jan 13, 2025

As described in #459, there is an unintended second inversion of minimization targets when bounds are involved, since the bounds add a corresponding additional transformation to the target. This results in:

  • one inversion happening when going from experimental to computational target values, and
  • another inversion (i.e. the desired one, introduced for MIN mode via acquisition function #340) happening when evaluating the acquisition values,

so the two inversions cancel each other out (see the sketch below).
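For intuition, here is a small numeric sketch of how the two inversions cancel. This is illustrative only: the linear_transform stand-in below is a simplified version of the bound transformation, and the acquisition-side inversion for MIN mode is assumed to be a plain sign flip.

import numpy as np


def linear_transform(x, lower, upper, descending):
    """Simplified stand-in: map x from [lower, upper] onto [0, 1], optionally reversing the order."""
    scaled = (x - lower) / (upper - lower)
    return 1 - scaled if descending else scaled


t = np.array([0.0, 0.5, 1.0])  # experimental target values; smaller is better (MIN)

comp = linear_transform(t, 0, 1, descending=True)  # first inversion: [1.0, 0.5, 0.0]
acq_view = -comp                                   # assumed second inversion in MIN mode: [-1.0, -0.5, 0.0]

# The acquisition maximizes acq_view, which is largest for the *largest* original t,
# i.e. the two inversions cancel and the target is effectively maximized instead of minimized.
print(comp, acq_view)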

Minimal Example

import numpy as np
import pandas as pd

from baybe.parameters.numerical import NumericalContinuousParameter
from baybe.recommenders import BotorchRecommender
from baybe.targets.numerical import NumericalTarget

searchspace = NumericalContinuousParameter("p", [0, 1]).to_searchspace()
objective = NumericalTarget("t", "MIN", (0, 1)).to_objective()  # bounded MIN target: the bounds add the extra transform
recommender = BotorchRecommender()

# t increases with p, so correct minimization should recommend p close to 0
measurements = pd.DataFrame({"p": np.linspace(0, 1, 100), "t": np.linspace(0, 1, 100)})
rec = recommender.recommend(1, searchspace, objective, measurements)
print(rec)

Sketch of Fix

The problem can be fixed by setting the `descending` argument on the following line to `False`:

return partial(linear_transform, descending=True)

However, this would result in incorrect behavior for targets entering a desirability objective, for which the inversion is needed. Unfortunately, the information whether a single target or a desirability objective is used is unavailable at this point. So we need to find an alternative hotfix until a "proper" solution is implemented (which is around the corner as part of the Pareto work).
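For context, here is a rough sketch of why the desirability case relies on the inversion at exactly this point. The geometric-mean desirability below is hand-rolled for illustration and not BayBE's implementation: all targets must be mapped to a common "larger is better" scale in [0, 1] before being combined, which is what the descending transform provides for a bounded MIN target.

import numpy as np

# Two bounded targets on [0, 1]: t_max should be maximized, t_min minimized.
t_max = np.array([0.2, 0.8])
t_min = np.array([0.9, 0.1])

# For desirability, both must be on a common "larger is better" scale in [0, 1]:
d_max = t_max        # already ascending
d_min = 1.0 - t_min  # descending transform: the inversion that would be lost with descending=False

desirability = np.sqrt(d_max * d_min)  # geometric mean of the two single-target desirabilities
print(desirability)  # the second candidate (large t_max, small t_min) scores higher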

AdrianSosic added the bug label Jan 13, 2025
AdrianSosic commented:

I guess the hotfix solution could be to simply (temporarily) invert a third time in SingleTargetObjective when triggering the transformation on a MIN target. Pretty dumb, but it ain't dumb if it works 😄 can try
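A rough sketch of what that extra inversion could look like — the attribute names and the mode/bounds checks below are illustrative assumptions, not the actual BayBE code:

import pandas as pd


def transform_with_extra_inversion(objective, measurements: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical wrapper around a single-target objective's transform.

    Assumes the objective wraps exactly one target (exposed here as `_target`, an assumed
    attribute) and that bounded MIN targets are the ones affected by the unintended inversion.
    """
    transformed = objective.transform(measurements)
    target = objective._target  # assumed attribute access
    if target.mode.name == "MIN" and target.bounds.is_bounded:  # assumed API
        # Third inversion: cancels the unwanted one from the experimental -> computational step
        transformed[target.name] = -transformed[target.name]
    return transformed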


AdrianSosic commented Jan 13, 2025

@Scienfitz, @Alwalid-Abushanab, a hotfix is up under hotfix/minimization. Can one of you confirm if it works?

AdrianSosic self-assigned this Jan 13, 2025

AVHopp commented Jan 14, 2025

I briefly ran your exact code as well as minor variations of it (changing the bounds a bit). The recommendation is consistently equal to

     p
0  0.0

which is what we expect, right?

AdrianSosic commented:

Yes. But I was hoping more for some alternative example / some other sanity check. I already tested my example above plus the original example posted in #459.


AVHopp commented Jan 14, 2025

Then let me do some more testing.

AdrianSosic added a commit that referenced this issue Jan 15, 2025
This PR hot-fixes #460 by multiplying the output of `objective.transform` by -1 if the target is bounded and to be minimized.

This PR also introduces a test and a small example for verifying the
desired behavior. The solution implemented here is only temporary and
will be replaced with a proper mechanism soon.
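A minimal sketch of what such a verification could look like, built on the example from this issue (pytest-style; the actual test added in the PR may differ):

import numpy as np
import pandas as pd

from baybe.parameters.numerical import NumericalContinuousParameter
from baybe.recommenders import BotorchRecommender
from baybe.targets.numerical import NumericalTarget


def test_bounded_min_target_is_minimized():
    searchspace = NumericalContinuousParameter("p", [0, 1]).to_searchspace()
    objective = NumericalTarget("t", "MIN", (0, 1)).to_objective()
    measurements = pd.DataFrame({"p": np.linspace(0, 1, 100), "t": np.linspace(0, 1, 100)})

    rec = BotorchRecommender().recommend(1, searchspace, objective, measurements)

    # With t increasing in p, minimization should recommend a point near p = 0.
    assert rec["p"].item() < 0.5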