Remove error throwing branch on FAILURE #218
Conversation
opt.local_optimizer = Opt(:LD_LBFGS, 2)
opt.min_objective = rosenbrock
NLopt.equality_constraint!(opt, circ_cons, [1e-8])
(minf, minx, ret) = optimize(opt, [0.5, 0.5])
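For context, a minimal self-contained version of the test under discussion might look like the sketch below. The outer :AUGLAG algorithm, the rosenbrock objective, and the circ_cons unit-circle constraint are assumptions filled in for illustration; only the four lines quoted in the diff appear in this thread.

```julia
using NLopt

# Rosenbrock objective with in-place gradient (assumed definition,
# not shown in the quoted diff)
function rosenbrock(x, grad)
    if length(grad) > 0
        grad[1] = -2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1]^2)
        grad[2] = 200 * (x[2] - x[1]^2)
    end
    return (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
end

# Unit-circle equality constraint 1 - x'x == 0 (non-convex; assumed definition)
function circ_cons(x, grad)
    if length(grad) > 0
        grad .= -2 .* x
    end
    return 1 - sum(abs2, x)
end

opt = Opt(:AUGLAG, 2)                  # assumed outer augmented-Lagrangian solver
opt.local_optimizer = Opt(:LD_LBFGS, 2)
opt.min_objective = rosenbrock
NLopt.equality_constraint!(opt, circ_cons, [1e-8])
(minf, minx, ret) = optimize(opt, [0.5, 0.5])
# With this PR, a FAILURE return code is reported in `ret`
# rather than raised as an error.
```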
Should we add a test for the ret?
Added it, but I am not sure why it gives FAILURE.
What do you mean? Isn't the point of the test that it's a FAILURE?
I meant that, given the results, the FAILURE is suspicious. You are correct that the point of this test is that in such cases we shouldn't error but should return the result.
The cause is probably the nonlinear equality constraint (which is non-convex)?
Yeah, but the constraint residual is below the set tolerance:

julia> 1.0 - sum(abs2, minx)
6.695932697198259e-11
Not sure then
Yeah, but that's why this makes sense.
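For what it's worth, the behaviour this PR enables can be sketched as follows: the caller inspects the returned code rather than relying on optimize to throw. This is a hypothetical usage sketch (the trivial quadratic objective and the FAILURE branch are purely illustrative), not the test from the PR:

```julia
using NLopt

opt = Opt(:LD_LBFGS, 2)
opt.min_objective = (x, grad) -> begin
    length(grad) > 0 && (grad .= 2 .* x)   # gradient of sum(x.^2)
    sum(abs2, x)
end
minf, minx, ret = optimize(opt, [1.0, 1.0])

# With this PR, even a FAILURE code is returned rather than thrown,
# so the caller decides how to handle it:
if ret == :FAILURE
    @warn "NLopt reported FAILURE; inspect the result manually" minf minx
end
```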
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##           master     #218      +/-   ##
==========================================
- Coverage   69.95%   66.57%    -3.38%
==========================================
  Files           2        2
  Lines         802      709      -93
==========================================
- Hits          561      472      -89
+ Misses        241      237       -4
Can this be merged and tagged, please?
Why is this needed for SciML/Optimization.jl#799?
I am using the same test as the one added here; it's a pretty standard one, so it shouldn't be throwing an error.
The other option is to just relax the convergence tolerance?
Or we could just remove the
I don't think this is the right approach, because it avoids throwing an error for all methods that call
Do you have an example of something that should throw an error but no longer does? The tests seem to pass, so I am curious. I think this branch may need more detailed conditions; just having a
I think you actually want to return from
Oh, I just realised what you mean in #218 (comment). I'd be happy with that; it doesn't look like the error messages improve significantly over the return code right now.
Closing in favor of #221. (I've added you as a co-author)
I didn't even know that's something you can do haha! Thanks for the help 🙌
You add the line
The added test should demonstrate that it'll be useful to return the result even with FAILURE instead of throwing a very uninformative error.