[FEATURE REQUEST] modify Ax API to allow for callable that evaluates a constraint and is passed to the optimizer #769
Hi @sgbaird! Let me think through this a bit and get back to you!
Hi again @sgbaird! We discussed this and here's what we're thinking:
What do you think? cc @Balandat and @dme65 for more thoughts on this, also : )
This is great! I'm looking forward to working on this. As for the constraints, could you clarify whether the idea is for me to work on my problem-specific constraints or to produce a general interface for constraints? For my problem-specific constraint (limiting the maximum number of components), after giving it some thought, it seems that the only transforms that affect the constraint are For a more general interface or template, I'd need to give it some more thought and base it on a few example constraints people might want to implement.
Definitely just the problem-specific constraints! If you end up liking working on Ax so much that you want to work on a general interface, of course we'll appreciate all the help we can get, but we certainly didn't mean to suggest that you'd be on the hook for that : )
I think that might be right; let me loop in @Balandat for that one!
So I also know that @dme65 has started to think about this, so maybe he has additional thoughts.
@Balandat good point. I was mistakenly thinking of
Hi @sgbaird, We have added support for non-linear inequality constraints in BoTorch, which you need for your ||x||_0 <= 3 constraint: pytorch/botorch#1067. The next step will be for us to figure out the best way to expose this functionality in Ax. In the meantime, I'm attaching a notebook that shows how to use this in BoTorch (you need to be on the main branch) for a toy problem with a similar constraint. The attached file is a notebook, but GitHub doesn't like .ipynb, so I uploaded it as a .txt:
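For readers without access to the attached notebook, the core idea of the approach in pytorch/botorch#1067 can be sketched as follows: replace the non-differentiable count `||x||_0` with a sum of narrow Gaussian "indicator bumps", one per component, so the constraint becomes smooth and a gradient-based optimizer can handle it. This is a minimal sketch; the function names (`narrow_gaussian`, `ineq_constraint`) and the length scale `ell` are illustrative, not the exact names from the notebook.

```python
import torch

def narrow_gaussian(x, ell=1e-3):
    # Smooth 0/1 indicator: ~1 where a component is (near) zero, ~0 elsewhere.
    return torch.exp(-0.5 * (x / ell) ** 2)

def ineq_constraint(x, ell=1e-3):
    # Approximate count of near-zero entries, minus the number required.
    # Feasible (>= 0) when at least d - 3 entries are ~0, i.e. ||x||_0 <= 3.
    return narrow_gaussian(x, ell).sum(dim=-1) - (x.shape[-1] - 3)
```

A callable of this shape is what gets handed to the optimizer via `optimize_acqf`'s `nonlinear_inequality_constraints` argument (as of the BoTorch main branch at the time of this thread), using the convention that feasible points return values >= 0.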
In terms of adding this to Ax, we have two paths:
While path 1 doesn't require any changes to Ax, we need support for SAASBO first, so I think we can start with path 2? I can put up a PR for 2.a sometime this week. @dme65 does this sound reasonable? @sgbaird, I think in the meantime you can start thinking about how you will formulate your constraints! The format should be as expected by
Wonderful! I will get started on the constraint formulation. Very excited!
@dme65 I was able to run your example notebook. Very nice! I really appreciated the explanations, so thank you for the Jupyter Notebook format. At first, I thought, why not use
Following up on my comment above: #769 (comment), @dme65
@lena-kashtelyan ok, no worries! Thank you for the heads-up.
Summary: See title. This diff isn't ready for review by any means, but is an attempt to temporarily unblock facebook#769. Differential Revision: D33853834
I took a quick stab at enabling this in Ax in #794. I think this works, but I haven't really tested it thoroughly, and the PR will need some work before we can actually merge it, which will have to wait until @lena-kashtelyan is back. Attaching a notebook with an Ax version of the example shared previously. This will require BoTorch main + #794 to work. The notebook has a few caveats, which are listed at the top. I hope this will unblock you for now, @sgbaird.
@dme65 this is great, thank you!! Just to clarify, I should still be able to pass in a linear inequality constraint to Also, I'm getting occasional assertion warnings (I changed these to warnings to probe a bit more) from the following check:

```python
for arm in trial.arms:
    # snap near-zero components to exactly zero, then count what's left
    arm._parameters = {k: 0.0 if v < 1e-3 else v for k, v in arm.parameters.items()}
    n_comp = sum(v > 1e-3 for v in arm.parameters.values())
    if n_comp > 3:
        warnings.warn(f"n_comp == {n_comp} ! <= 3, v: {arm.parameters}")
```
I'm guessing this has to do with it being a soft constraint and the width of the Gaussian? Constraint violation seems to get worse as the optimization progresses (in some cases, with all 6 components being non-zero) and even with a narrower Gaussian (
@sgbaird My guess is that you aren't actually on BoTorch main, which means that BoTorch ignores the non-linear inequality constraints (reinstalling Ax probably installed a stable version of BoTorch). Can you try pulling the main branch of BoTorch and then reinstalling BoTorch? Yeah, it should be possible to pass these constraints down. I think the
You'll also have to make sure that the initial Sobol points satisfy these additional constraints!
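One simple way to get constraint-satisfying initial points is to project each Sobol draw onto the feasible set by zeroing all but its largest coordinates. The sketch below is a hypothetical helper (not from the notebook), assuming a 6-dimensional unit cube and the ||x||_0 <= 3 constraint from this thread:

```python
import numpy as np
from scipy.stats import qmc

def feasible_sobol_points(n, d=6, max_nonzero=3, seed=0):
    # Hypothetical helper: draw Sobol points on [0, 1]^d and zero out
    # all but the `max_nonzero` largest coordinates of each point, so
    # every initial condition satisfies ||x||_0 <= max_nonzero.
    X = qmc.Sobol(d, seed=seed).random(n)
    out = np.zeros_like(X)
    for i, x in enumerate(X):
        keep = np.argsort(x)[-max_nonzero:]
        out[i, keep] = x[keep]
    return out
```

Points produced this way could then be converted to a tensor and supplied as `batch_initial_conditions` to `optimize_acqf`, which requires feasible starting points when non-linear inequality constraints are given.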
🤦 Pulled from main, reinstalled BoTorch:

```python
>>> import ax
>>> ax.__version__
'0.1.18.dev922'
>>> import botorch
>>> botorch.__version__
'0.6.1.dev31+gc564e333'
```

That seems to have fixed it. Also, for completeness, I forgot to mention that I also ended up needing to
Awesome, I will give this a try.
Ah, my bad. I was passing down your
@dme65 thanks for clarifying, and no worries. I ended up using
I think I misunderstood what you meant earlier. Linear equality constraints are actually supported if you pass them down directly to BoTorch as I did in my suggestion earlier:
I experimented with this equality constraint a bit, and it looks like SLSQP actually handles it without any issues, so you have the option of doing what @bernardbeckerman suggested or passing the equality constraint down to BoTorch directly.
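To illustrate the point about SLSQP handling equality constraints, here is a standalone SciPy sketch. The quadratic objective is a toy stand-in for the acquisition function, and the sum-to-one constraint is an assumed mixture-style constraint like the one discussed in this thread:

```python
import numpy as np
from scipy.optimize import minimize

d = 6
objective = lambda x: np.sum((x - 0.3) ** 2)  # toy stand-in for the acquisition value

res = minimize(
    objective,
    x0=np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]),  # feasible start: sums to one
    method="SLSQP",
    bounds=[(0.0, 1.0)] * d,
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
)
# Analytically, the constrained optimum spreads mass evenly (x_i = 1/6),
# with the sum pinned exactly to 1 by the equality constraint.
```

SLSQP enforces the equality constraint directly, which is why no narrow-Gaussian trick is needed for this part of the problem.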
@sgbaird, is there anything you need here currently or did you get all the help you needed? |
@lena-kashtelyan thanks for checking in! I think I have everything I need for now. It would still be nice to have the option of passing in a predefined list in addition to (the very much coveted) continuous constraints that I'm very excited about; but I've been trying not to ask for everything. You and others have been so helpful! I've been working on using Ax and got some nice results with both default settings and SAASBO (the latter of which I think set a new benchmark for a certain materials science problem) with hyperparameter optimization. Still working on the other projects, and excited to share the progress.
Great news, @sgbaird, thank you for sharing an update! Let me close this issue for now then, but please feel free to reopen when there should be more discussion (or when you want to share the results / paper you are referring to) : ) |
@Osburg and @jduerholt did some nice testing of SLSQP + narrow Gaussians vs. another method in experimental-design/bofire#145. Relevant comment by @Osburg in experimental-design/bofire#145 (comment), based in part on empirical testing.
Yeah, I think the narrow Gaussian approximation can indeed have vanishing gradients very quickly. One option for dealing with this would be to use a distribution with heavier tails, so that the gradients don't vanish as quickly and the optimizer can make progress. cc @SebastianAment, who has been thinking about similar things a lot recently.
Hi, I'm not sure how this has gone since the initial thread and the (very informative!) discussions; I've tried to run the notebook from @dme65 but am getting the error:
Short question: would reinstalling my scipy version solve this?
@Abrikosoff do you have a full repro of this? It looks like the args somehow get passed to the wrong function, but it will be much easier to debug if you can provide a self-contained example. Thanks!
@Balandat Thanks for the quick reply! As you predicted, this was a problem of passing the args to the wrong function; for completeness, I include below the (slight) rewrite of the code from @dme65 that worked for me:
and of course, this implementation is not compatible with the Service API. However, I am still curious why the original implementation did not work; was it due to an incompatibility between BoTorch versions? FWIW, I ran the originally linked notebook with no modifications. The only difference, as far as I can see, is that instead of doing optimize_acqf as done here, the original was in the form
Are you sure this is exactly what you ran? The error reported above seems to suggest you used
@Balandat Thanks for the reply! You are right, I posted an attempted correction by mistake; for completeness, the following is what happened: the original code (the relevant section) was in this form:
When I ran this, the "
(posted above), at which point the error appeared. I suspect this is a version problem, but I am a bit mystified, TBH. Any help would be greatly appreciated!
As has popped up in various other issues (#727, #745, #750) and per the discussion in a recent meeting with @lena-kashtelyan, @Balandat, and @bernardbeckerman, there is a need to "allow Ax to take in some callable that evaluates the constraint and that we can pass to the optimizer" (#745 (comment)), albeit as "an excellent way for people to shoot themselves in the foot":
I started digging through the Ax code but had trouble identifying where this might happen. I was using the Service API, setting breakpoints, and stepping into functions. Any ideas on where this callable would get passed to the optimizer?
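For concreteness, the kind of user-supplied callable this request envisions might look like the sketch below, following the convention (used by BoTorch's optimizer) that a point is feasible when the callable returns a value >= 0. The function name and parameters here are hypothetical; this is the exact, non-smooth version of the max-components constraint for clarity:

```python
import numpy as np

def n_components_constraint(x, max_components=3, tol=1e-3):
    # Hypothetical user-supplied callable: feasible when the return value
    # is >= 0, i.e. when at most `max_components` entries of x exceed `tol`.
    # Note this exact count is not differentiable, which is why the thread
    # above uses a smooth (narrow Gaussian) approximation for the
    # gradient-based optimizer.
    return max_components - int(np.sum(np.asarray(x) > tol))
```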