
Enabling candidate generation with both non_linear_constraints and fixed_features #1912

Closed
wants to merge 16 commits

Conversation

jduerholt
Contributor

@jduerholt jduerholt commented Jul 3, 2023

Motivation

This PR adds the possibility to use fixed_features in combination with nonlinear_inequality_constraints as discussed in issue #1904.

@Balandat: concerning the solution you suggested: I think it is much simpler than that, since the f_np_wrapper is also used for the nonlinear constraints:

f_np_wrapper=f_np_wrapper,
and fix_features is already applied there:
X_fix = fix_features(X, fixed_features=fixed_features)

So there is no need for another wrapper, right?

A wrapper function would only be needed if one also wanted to use the nonlinear constraints with the reduced domain; in that case, one would need a method that takes the reduced X as input and augments it with the fixed features.
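For illustration, such a wrapper could look like the following sketch (`make_unfixed_constraint` and its signature are my own naming for this example, not BoTorch API):

```python
import torch


def make_unfixed_constraint(constraint, fixed_features, dimension):
    """Sketch: wrap a constraint defined on the full `dimension`-dim space
    so it can be evaluated on a reduced X containing only the non-fixed
    features (hypothetical helper, for illustration only)."""
    free_idcs = [i for i in range(dimension) if i not in fixed_features]

    def wrapped(X_reduced):
        # Augment the reduced input with the fixed feature values,
        # then evaluate the original constraint on the full-dim tensor.
        X_full = torch.empty(
            X_reduced.shape[:-1] + (dimension,), dtype=X_reduced.dtype
        )
        X_full[..., free_idcs] = X_reduced
        for i, val in fixed_features.items():
            X_full[..., i] = val
        return constraint(X_full)

    return wrapped
```

For instance, with fixed_features={2: 0.5}, the wrapped version of a constraint on x0 + x1 + x2 would be evaluated on a 2-dim reduced input with x2 filled in as 0.5.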

Here is some example code that shows that it works:

import torch
from botorch.acquisition import UpperConfidenceBound
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from botorch.utils import standardize
from botorch.utils.sampling import get_polytope_samples
from gpytorch.mlls import ExactMarginalLogLikelihood


def constraint(X):
    # feasible iff x0 + x1 + x2 >= 1.8
    return torch.sum(X[..., :3], dim=-1) - 1.80


train_X = torch.rand(10, 3)
Y = 1 - torch.norm(train_X[:, :2] - 0.5, dim=-1, keepdim=True)
Y = Y + 0.1 * torch.randn_like(Y)  # add some noise
train_Y = standardize(Y)

gp = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_mll(mll)

UCB = UpperConfidenceBound(gp, beta=0.1)

bounds = torch.stack([torch.zeros(3), torch.ones(3)])

batch_initial_conditions = get_polytope_samples(
    n=5,
    bounds=bounds,
    inequality_constraints=[
        (torch.tensor([0, 1, 2]), torch.tensor([1.0, 1.0, 1.0]), 1.8)
    ],
    n_burnin=1000,
).unsqueeze(-2)

candidate, acq_value = optimize_acqf(
    UCB,
    bounds=bounds,
    q=1,
    num_restarts=5,
    raw_samples=20,
    fixed_features={2: 0.5},
    nonlinear_inequality_constraints=[constraint],
    batch_initial_conditions=batch_initial_conditions,
)
print(candidate)  # tensor([0.4887, 0.5063])
print(candidate.sum())

Have you read the Contributing Guidelines on pull requests?

Yes.

Test Plan

Unit tests.

@facebook-github-bot added the CLA Signed label Jul 3, 2023
@codecov

codecov bot commented Jul 3, 2023

Codecov Report

Merging #1912 (dfa9b64) into main (38e2027) will not change coverage.
The diff coverage is 100.00%.

❗ Current head dfa9b64 differs from pull request most recent head 0e71cdb. Consider uploading reports for the commit 0e71cdb to get more accurate results

@@            Coverage Diff            @@
##              main     #1912   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files          173       173           
  Lines        15232     15290   +58     
=========================================
+ Hits         15232     15290   +58     
Impacted Files Coverage Δ
botorch/models/contextual.py 100.00% <ø> (ø)
botorch/models/pairwise_gp.py 100.00% <ø> (ø)
botorch/models/utils/assorted.py 100.00% <ø> (ø)
botorch/generation/gen.py 100.00% <100.00%> (ø)
botorch/generation/utils.py 100.00% <100.00%> (ø)
botorch/models/kernels/contextual_lcea.py 100.00% <100.00%> (ø)
botorch/models/likelihoods/pairwise.py 100.00% <100.00%> (ø)
botorch/optim/parameter_constraints.py 100.00% <100.00%> (ø)
botorch/utils/probability/utils.py 100.00% <100.00%> (ø)


Contributor

@Balandat left a comment

Yeah, I think this makes sense. If there are multiple fixed features, it's going to be slower than optimizing over the reduced domain, since (i) the optimizer needs to identify the fixed features and (ii) SLSQP scales pretty poorly with the problem dimension anyway.

But that's probably fine for now (maybe add this comment in the code for future reference?).

tensor([0.4887, 0.5063])

This doesn't seem to be the right output? It should be 3-dim, with the third feature fixed to 0.5.

test/optim/test_optimize.py (outdated review thread, resolved)
nonlinear_inequality_constraints=[nlc1, nlc2, nlc3, nlc4],
batch_initial_conditions=batch_initial_conditions,
num_restarts=num_restarts,
fixed_features={0: 2},
Contributor

Hmm, with this setup the test may pass even if fixing the features doesn't work (given that it only narrows this down to a specific solution within the previous solution set). Can you use a different value here? It might also make sense to use a different mock acquisition function so that the optimizer is unique...

Contributor Author

I will take a deeper look at the test. All this mocking is sometimes a bit hard for me to understand ;)

Contributor

@Balandat commented Jul 3, 2023

I don't think there is actually any mocking going on here - it's just that mock_acq_function is a SquaredAcquisitionFunction rather than a true acquisition function.

If you don't want to change that, one thing you could do would be to just replace one of the constraints to result in a different optimum / optimizer.

@Balandat
Contributor

Balandat commented Jul 3, 2023

Test failure seems unrelated; I'll take a look at that.

@jduerholt
Contributor Author

Yeah, I think this makes sense. If there are multiple fixed features, it's going to be slower than optimizing over the reduced domain, since (i) the optimizer needs to identify the fixed features and (ii) SLSQP scales pretty poorly with the problem dimension anyway.

I am already working on the proper solution with the reduced domain. I will give it a few more minutes and if I think it is feasible without taking too much time, I will add it to the PR.

This doesn't seem to be the right output? Should be 3dim with the third feature fixed to 0.5?

You are correct; this was copied from the minimal example on the BoTorch website, and I forgot to remove it :D

jduerholt and others added 4 commits July 3, 2023 16:25
@jduerholt
Contributor Author

Hi @Balandat,

I have now also implemented the solution using the reduced domain. Which one do you prefer?

Best,

Johannes

@jduerholt jduerholt requested a review from Balandat July 3, 2023 22:07
Contributor

@Balandat left a comment

Looks great, just a couple of minor style nits. Thanks for putting this up!

botorch/optim/parameter_constraints.py (outdated review threads, resolved)
test/optim/test_parameter_constraints.py (outdated review threads, resolved)
jduerholt and others added 4 commits July 4, 2023 07:52
Co-authored-by: Max Balandat <Balandat@users.noreply.github.com>
@jduerholt
Contributor Author

@Balandat, I incorporated your changes and added another test that was still missing (from my perspective). From my side, this is now ready to merge.

@jduerholt jduerholt requested a review from Balandat July 4, 2023 13:37
@Balandat
Contributor

Balandat commented Jul 4, 2023

Great, thank you. I'll pull it in and merge it.

@facebook-github-bot
Contributor

@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@SebastianAment
Contributor

Test failure seems unrelated I'll take a look at that

This was solved by #1919; it should pass upon re-running the tests.

Contributor

@SebastianAment left a comment

Thanks for adding this @jduerholt! Mainly left cosmetic suggestions that will aid readability, as well as completed type signatures.

Last, could you rename the PR to something a little more descriptive, like "Enabling candidate generation with both non_linear_constraints and fixed_features"? This will make it easier to understand what's going on when browsing the commit history at a high level.

constraints: Optional[List[Callable]],
fixed_features: Dict[int, float],
dimension: int,
) -> Optional[List[Callable]]:
Contributor

Two subjective suggestions to make this quicker to parse:

  1. How about _gen_nonlin_constraints_of_variable_features? Could apply same convention to the linear case below. (cc @Balandat)
  2. Could you add a doc-string here, something like "Given a dictionary of fixed_features, returns a list of non-linear constraints on the variable (non-fixed) features."

Contributor Author

I added a docstring. Concerning the renaming: should I do it here, or should it be done in a separate PR for both methods (nonlinear and linear) to keep it consistent?

Contributor

@SebastianAment commented Jul 6, 2023

Thank you! If you're up for it, you can do both in this commit. The only mentions of _generate_unfixed_lin_constraints are in:

  • generation/utils.py,
  • optim/parameter_constraints.py, and
  • test/optim/parameter_constraints.py.

A quick search and replace in these files should do it.

Contributor Author

Hmm, but do you really prefer the new name? What is it that you do not like about _generate_unfixed_lin_constraints?

Contributor Author

But ofc, I can do it ;)

Contributor

_generate_unfixed_(non)lin_constraints makes it sound like the constraints are "unfixed", whereas it is the features that are. But let's keep it then, especially since the docstring makes it clear now. Thanks for adding it!

botorch/optim/parameter_constraints.py (outdated review threads, resolved)
botorch/generation/utils.py (outdated review threads, resolved)
Co-authored-by: Sebastian Ament <56886879+SebastianAment@users.noreply.github.com>
@jduerholt jduerholt changed the title Non linear and fixed Enabling candidate generation with both non_linear_constraints and fixed_features Jul 6, 2023
@facebook-github-bot
Contributor

@SebastianAment has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@Balandat merged this pull request in 6f98187.
