
Feature/non linear q #1793

Closed · 28 commits

Conversation

jduerholt (Contributor)

Motivation

This PR adds the possibility of q>1 for optimizing acqfs with nonlinear constraints. I just removed the blocking of q>1 in gen_candidates_scipy and tested it with NChooseK constraints. It worked, so I am wondering why it was blocked originally. Any idea?

I can also provide an example showing that it works.

Have you read the Contributing Guidelines on pull requests?

Yes.

Test Plan

Unit tests.

facebook-github-bot added the CLA Signed label on Apr 16, 2023
codecov bot commented on Apr 16, 2023

Codecov Report

Merging #1793 (21a260c) into main (62949cc) will not change coverage.
The diff coverage is 100.00%.

❗ Current head 21a260c differs from pull request most recent head 24a778d. Consider uploading reports for the commit 24a778d to get more accurate results.

@@            Coverage Diff             @@
##              main     #1793    +/-   ##
==========================================
  Coverage   100.00%   100.00%            
==========================================
  Files          170       170            
  Lines        14773     14920   +147     
==========================================
+ Hits         14773     14920   +147     
| Impacted Files | Coverage Δ |
| --- | --- |
| botorch/acquisition/monte_carlo.py | 100.00% <ø> (ø) |
| ...tion/multi_objective/multi_output_risk_measures.py | 100.00% <ø> (ø) |
| botorch/generation/gen.py | 100.00% <ø> (ø) |
| botorch/acquisition/cached_cholesky.py | 100.00% <100.00%> (ø) |
| botorch/acquisition/input_constructors.py | 100.00% <100.00%> (ø) |
| botorch/acquisition/joint_entropy_search.py | 100.00% <100.00%> (ø) |
| ...sition/multi_objective/max_value_entropy_search.py | 100.00% <100.00%> (ø) |
| botorch/acquisition/utils.py | 100.00% <100.00%> (ø) |
| botorch/models/approximate_gp.py | 100.00% <100.00%> (ø) |
| botorch/models/fully_bayesian.py | 100.00% <100.00%> (ø) |

... and 9 more


Balandat (Contributor)

Nice.

> I just removed the blocking of q>1 in gen_candidates_scipy and tested it with NChooseK constraints. I can also provide an example showing that it works.

An example would be great. I wonder if a short-but-sweet mini-tutorial for parameter constraints (incl. a simple linear constraint and e.g. more complicated NChooseK constraints) would make sense?

> So I am wondering why it was blocked originally. Any idea?

I think it may just have been something we didn't test / weren't sure about when we first added the experimental support for nonlinear constraints.

Balandat self-requested a review and self-assigned this on Apr 16, 2023
jduerholt (Contributor, Author)

> An example would be great. I wonder if a short-but-sweet mini-tutorial for parameter constraints (incl. a simple linear constraint and e.g. more complicated NChooseK constraints) would make sense?

So you recommend a small Jupyter notebook within the tutorials?

Balandat (Contributor)

> So you recommend a small Jupyter notebook within the tutorials?

Yes, I think that would be great. Maybe a simple synthetic function and then show a few different ways to optimize an acquisition function subject to parameter constraints (both linear and nonlinear). No need to make a full Bayesian Optimization loop out of it.

jduerholt (Contributor, Author)

Ok, sounds good. Do you want to have it as part of this PR, or in a separate one?

jduerholt (Contributor, Author)

Small update on this PR: we should not merge it yet. It looks like it was somehow a coincidence that the nonlinear constraint was fulfilled for q>1; in more elaborate experiments I found that this is not always the case. I have to dig a bit deeper there ;)

Balandat (Contributor)

If you can do it as part of this PR that would be great (also as a test plan). But if you’d like to get this in soon and need some more time to put together the tutorial that’s fine, too. Thanks!

jduerholt (Contributor, Author)

It could be that this takes a bit longer, let's see. I had overlooked something, but have now found out what it was, which is already progress ;)

jduerholt (Contributor, Author)

The point is the following:

Just commenting out the check applies the constraint over the flattened q-batch, which means the constraint is applied inter-point. But do we want to have it always intra-point, or do we want options for both? I would say that always going intra-point is sufficient.

Balandat (Contributor)

> Just commenting out the check applies the constraint over the flattened q-batch, which means the constraint is applied inter-point. But do we want to have it always intra-point, or do we want options for both? I would say that always going intra-point is sufficient.

Hmm, and the additional complexity compared to the linear constraints is that there is no standardized interface for passing in intra- or inter-point constraints, right? And we'd like to make it easy for folks to apply a nonlinear constraint that's defined for a single element of the domain to all elements of a q-batch, right? I think restricting to intra-point should then be ok, as long as we make clear in the docstring that that's the restriction.
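To make the distinction concrete, here is a minimal sketch (editorial illustration, not code from this PR; the constraint functions and names are made up): an intra-point constraint sees one candidate at a time, while an inter-point constraint sees the whole q-batch.

```python
import torch

# Intra-point: the callable gets a single candidate x of shape (d,);
# c(x_i) >= 0 must hold separately for each of the q candidates.
def intra_constraint(x: torch.Tensor) -> torch.Tensor:
    return 1.0 - x.norm()  # e.g., each candidate must lie in the unit ball

# Inter-point: the callable gets the full q-batch X of shape (q, d),
# so it can couple the candidates across the batch.
def inter_constraint(X: torch.Tensor) -> torch.Tensor:
    return 10.0 - X.sum()  # e.g., a shared budget over all q candidates
```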

jduerholt (Contributor, Author)

Another option would be to change the signature of the nonlinear constraints from `Optional[List[Callable]]` to `Optional[List[Tuple[Callable, bool]]]`, where the bool indicates whether the constraint is intra-point or inter-point ...

Balandat (Contributor)

> Another option would be to change the signature of the nonlinear constraints from `Optional[List[Callable]]` to `Optional[List[Tuple[Callable, bool]]]`, where the bool indicates whether the constraint is intra-point or inter-point ...

Hmm, that's an interesting idea as well. I guess this is a rather advanced feature, so I wouldn't be too worried about the interface being a bit clunkier. That sounds good to me if we think there will be use cases for nonlinear inter-point constraints.

jduerholt (Contributor, Author)

One can come up with use cases; the question is how realistic they are. But in general I think it is good to keep things as flexible as possible, so I will try to implement it as described above.
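For concreteness, a sketch of what a call with the proposed tuple format could look like (hedged: flag semantics as discussed above, True for intra-point and False for inter-point; `acqf`, `bounds`, and `batch_initial_conditions` are placeholders, and the constraint callables are the illustrative ones from the sketch above):

```python
from botorch.optim import optimize_acqf

candidate, acq_value = optimize_acqf(
    acq_function=acqf,    # placeholder acquisition function
    bounds=bounds,        # placeholder 2 x d bounds tensor
    q=3,
    num_restarts=4,
    batch_initial_conditions=batch_initial_conditions,  # feasible starting points
    nonlinear_inequality_constraints=[
        (intra_constraint, True),   # applied to each point of the q-batch
        (inter_constraint, False),  # applied to the q-batch as a whole
    ],
)
```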

jduerholt (Contributor, Author)

I found some time to implement a prototype of what I had in mind.

This script tests it on the 6-dim Hartmann function with an NChooseK constraint. Most of the time it seems to work, but sometimes it returns candidates where the constraint evaluation yields values substantially smaller than zero. Not sure what is going on there:

import torch
from botorch.acquisition import qExpectedImprovement
from botorch.acquisition.objective import GenericMCObjective
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from botorch.models.transforms import Standardize
from botorch.optim import optimize_acqf
from botorch.test_functions import Hartmann
from gpytorch.mlls import ExactMarginalLogLikelihood
from torch.quasirandom import SobolEngine


def narrow_gaussian(x, ell):
    # Smooth indicator of x == 0: close to 1 for |x| << ell, close to 0 otherwise.
    return torch.exp(-0.5 * (x / ell) ** 2)


def ineq_constraint(x, ell=1e-3):
    # Differentiable approximation of || x ||_0 <= 3: the narrow Gaussians sum to
    # (roughly) the number of near-zero entries, so requiring the sum to be at
    # least 3 means at most 3 of the 6 parameters are active.
    # The constraint is >= 0 to conform with SLSQP.
    return narrow_gaussian(x, ell).sum(dim=-1) - 3


def get_feasible_sobol_points(n):
    """Sobol sequence where we randomly set three of the parameters to zero to satisfy the constraint"""
    X = SobolEngine(dimension=6, scramble=True).draw(n).to(torch.double)
    inds = torch.argsort(torch.rand(n, 6), dim=-1)[:, :3]
    X[torch.arange(X.shape[0]).unsqueeze(-1), inds] = 0
    return X


def get_batch_initial_conditions(num_restarts, raw_samples, q, acqf):
    """Sample feasible `raw_samples x q x 6` points and keep the `num_restarts` best q-batches."""
    X = get_feasible_sobol_points(raw_samples * q).reshape(raw_samples, q, 6)
    acq_vals = acqf(X)
    return X[acq_vals.topk(num_restarts).indices]


hartmann = Hartmann(dim=6)

X = get_feasible_sobol_points(10)
Y = hartmann(X).unsqueeze(-1)
print(f"Best initial point: {Y.min().item():.3f}")

gp = SingleTaskGP(X, Y, outcome_transform=Standardize(m=1))
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_model(mll)

# qExpectedImprovement has no `maximize` flag (unlike the analytic
# ExpectedImprovement), so we minimize Hartmann by negating the outcome and
# maximizing; `best_f` is the incumbent best of the negated objective.
objective = GenericMCObjective(lambda samples, X=None: -samples.squeeze(-1))
EI = qExpectedImprovement(model=gp, best_f=-Y.min(), objective=objective)

batch_initial_conditions = get_batch_initial_conditions(num_restarts=1, raw_samples=512, acqf=EI, q=3)

print(batch_initial_conditions)

candidate, acq_value = optimize_acqf(
    EI,
    bounds=torch.cat((torch.zeros(1, 6), torch.ones(1, 6))),
    q=3,
    nonlinear_inequality_constraints=[ineq_constraint],
    batch_initial_conditions=batch_initial_conditions,
    num_restarts=20,
)

print(candidate)

# Constraint values for the returned candidates; feasible means >= 0.
print(ineq_constraint(candidate))

Balandat (Contributor)

> Sometimes it returns candidates where the constraint evaluation yields values substantially smaller than zero. Not sure what is going on there.

Interesting; I'll take a look.

Balandat (Contributor) commented on May 2, 2023

I looked into this a bit, and every time the constraints are infeasible it's because scipy.minimize failed. Reasons in the warnings include

OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 8 and message Positive directional derivative for linesearch.')

or

OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 5 and message Singular matrix E in LSQ subproblem.')

There also seem to be some failures that don't result in infeasible candidates, but those are rare. I haven't figured out what's going on exactly, but this simply seems to be due to the optimization not terminating properly.

Out of 100 runs:

- No warnings, feasible: 91
- No warnings, infeasible: 0
- 1 warning, feasible: 1
- 1 warning, infeasible: 8
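Until the root cause is understood, one pragmatic guard is to re-check the returned candidates against the constraint and retry from fresh initial conditions. A sketch reusing the script above (the retry budget and feasibility tolerance are illustrative):

```python
# Retry from fresh feasible initial conditions whenever the returned
# candidates violate the constraint, which in the runs above coincides
# with scipy.optimize.minimize failing.
for attempt in range(5):  # illustrative retry budget
    batch_initial_conditions = get_batch_initial_conditions(
        num_restarts=1, raw_samples=512, acqf=EI, q=3
    )
    candidate, acq_value = optimize_acqf(
        EI,
        bounds=torch.cat((torch.zeros(1, 6), torch.ones(1, 6))),
        q=3,
        nonlinear_inequality_constraints=[ineq_constraint],
        batch_initial_conditions=batch_initial_conditions,
        num_restarts=20,
    )
    if (ineq_constraint(candidate) >= -1e-6).all():  # illustrative tolerance
        break
```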

jduerholt (Contributor, Author)

Hi Max,

thanks for investigating this.

So, how should we proceed? Is the proposed solution fine with you?

Should we remove the old solution, which effectively applied the constraint inter-point, or keep it and give the user the possibility to toggle between the two?

Best,

Johannes

jduerholt (Contributor, Author)

Hi Max,

any opinion on the question above?

Best,

Johannes

Balandat (Contributor)

Sorry, I missed this. Yeah, let's give the user the possibility to toggle between inter-point and intra-point constraints.

Balandat (Contributor) commented on Nov 2, 2023

@jduerholt checking in on the status of this - are you still planning on merging this in? Anything needed from our end at this point?

jduerholt (Contributor, Author)

Hi Max, it is still on my list. I hope to get it finalized in November. There were a lot of other things to do, so I had to reprioritize ...

Balandat (Contributor) commented on Nov 2, 2023

No worries, looking forward to it :)

jduerholt (Contributor, Author)

Hi @Balandat,

I think I am done with all your suggestions. You can have a look again.

Best,

Johannes

facebook-github-bot (Contributor)

@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Balandat (Contributor) left a review comment:

Thanks. Mostly cosmetic changes; once those are in I'm happy to merge this.

Review comments (all resolved) on: botorch/generation/gen.py, botorch/generation/utils.py, botorch/optim/optimize.py, botorch/optim/parameter_constraints.py
The review discussion below refers to this excerpt from the new tests:

    _flip_sub_unique,
    _remove_fixed_features_from_optimization,
)
from botorch.utils.testing import BotorchTestCase, MockAcquisitionFunction


class TestGenerationUtils(BotorchTestCase):
    def test_convert_nonlinear_inequality_constraints(self):
        for dtype in (torch.float, torch.double):

Balandat (Contributor): Don't think you need to test for different dtypes here? Not used right now...

jduerholt (Contributor, Author): I think it is not needed, as I am not doing anything with tensors here, so I removed it.

facebook-github-bot (Contributor)

@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Balandat (Contributor) left a review comment:

Many thanks!

facebook-github-bot (Contributor)

@Balandat merged this pull request in e2cadcd.

jduerholt mentioned this pull request on Jan 15, 2024

facebook-github-bot pushed a commit that referenced this pull request on Jan 17, 2024
Summary:

## Motivation

When implementing the possibility to use `q>1` for nonlinear constraints in PR #1793, I forgot to update the `batch_limit` check when commenting the check back in. Thus, the feature was still not usable. Here it is fixed, and it is actually tested that it works with `q>1`.

Before, only the lower-level methods were tested with `q>1`, but not `optimize_acqf`. This demonstrates again that one can never write too many tests :D

cc: Balandat

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?

Yes.

Pull Request resolved: #2168

Test Plan: Unit tests.

Reviewed By: saitcakmak

Differential Revision: D52786447

Pulled By: Balandat

fbshipit-source-id: 58d5eb2da4298849bdd97aec92eb96c363ef87e0
stefanpricopie pushed a commit to stefanpricopie/botorch that referenced this pull request on Feb 27, 2024, with the same summary as above (references rewritten as pytorch#1793 and pytorch#2168).
saitcakmak added commits to saitcakmak/botorch that referenced this pull request on Nov 15, 2024

Summary: Legacy non-linear inequality constraint format has been deprecated since pytorch#1793.

Reviewed By: esantorella

Differential Revision: D66003879
facebook-github-bot pushed a commit that referenced this pull request on Nov 15, 2024
Summary:
Pull Request resolved: #2627

Legacy non-linear inequality constraint format has been deprecated since #1793.

Reviewed By: esantorella

Differential Revision: D66003879

fbshipit-source-id: a1ddfa26182f74b80c0c8aa17c801ad74b81da04