Add `rewrite_call_expression` through `_rewrite_expr` (#2241)
Conversation
Codecov Report
```
@@            Coverage Diff            @@
##           master    #2241     +/-  ##
=========================================
  Coverage   91.38%   91.38%
=========================================
  Files          42       42
  Lines        4189     4250     +61
=========================================
+ Hits         3828     3884     +56
- Misses        361      366      +5
```
Continue to review full report at Codecov.
I just added a few tests for binary operators.
src/macros.jl

```julia
_rewrite_expr(_error::Function, head::Val{:call}, args...) =
    _rewrite_expr(_error, Val(:call), Val(args[1]), args[2:end]...)

function _rewrite_expr(_error::Function, ::Val{:call}, op::Union{Val{:+}, Val{:-}, Val{:*}, Val{:/}}, args...)
```
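The double dispatch in the diff above can be sketched in isolation (hypothetical names `rw_call`/`rw` and simplified return values — this is not the PR's actual code, only an illustration of the pattern):

```julia
# Generic `:call` handler: wrap the operator symbol in `Val` and
# re-dispatch, so that specific operators can be intercepted by
# simply adding methods.
rw_call(ex::Expr) = rw(Val(ex.head), ex.args...)
rw(::Val{:call}, args...) = rw(Val(:call), Val(args[1]), args[2:end]...)

# Dedicated method for the four arithmetic operators (here just tagged).
rw(::Val{:call}, ::Union{Val{:+},Val{:-},Val{:*},Val{:/}}, args...) =
    (:known_op, args...)

# Fallback for operators without a dedicated method.
rw(::Val{:call}, ::Val{op}, args...) where {op} = (:fallback, op, args...)
```

An extension could then hook new syntax by adding a method such as `rw(::Val{:call}, ::Val{:count}, args...)`, without touching the generic handler.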
Why have a method with `OP` above which does not rewrite recursively, and this one which rewrites recursively?
The previous one is a fallback for unimplemented operators. I don't know if it's the most sensible default: I try to enable this code as rarely as possible, to avoid breaking things.
Force-pushed from d7f8ae4 to 481c454 (compare)
Can we benchmark this?
I removed this approach in MathOptFormat because it was poorly inferred:
https://github.com/jump-dev/MathOptInterface.jl/pull/1111/files#diff-58997e5cd3d0fcda1923fbd4f434f673R98-R132
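For context on the inference concern, a hedged illustration (hypothetical names, not the PR's code): wrapping a runtime `Symbol` in `Val` is type-unstable, because the compiler cannot know `Val{s}` when `s` is not a constant.

```julia
# `Val` built from a runtime Symbol has a value-dependent type,
# so callers cannot infer the concrete result type.
unstable(s::Symbol) = Val(s)   # return type Val{s} depends on the value of s
stable() = Val(:call)          # constant argument: inferred as Val{:call}

@assert unstable(:call) === Val(:call)
@assert stable() === Val(:call)
```

`@code_warntype unstable(:call)` shows the abstract `Val` return type, whereas `stable` infers to the concrete singleton `Val{:call}`.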
I did a little benchmark on this (https://github.com/dourouc05/JuMP.jl/blob/dourouc05-rewrite-complex/benchmark/parse_constraints.jl). Indeed, without writing complex things, this code may make parsing the constraints twice as slow. Before:
With this PR:
While you don't look opposed to the idea, I guess the implementation is not performant enough. I'm not sure I get the details of the code you point at: you are generating "fast" code to parse the head of an expression, right? Plus using … It looks like this implementation spends a lot of time in type inference. Before:
After:
However, I don't know how to interpret these… https://github.com/thautwarm/MLStyle.jl could be a solution to implement this pattern matching with high performance, but no extension possibilities (which is the point here).
You could generate a single constraint that is bridged into several constraints. I'd prefer not having a constraint return several MOI constraints.
Which "single" constraint? Does MOI need a way to represent constraints like …? That's why I think it would make sense, at the modelling layer, to decompose complex constraints into bite-sized MOI constraints, at the lowest level of abstraction that makes sense based on the solver APIs (that would mean adding things like …).
Yes. This is the point I settled on. Create a giant …
That would defeat the whole point of this, which is to allow extensions of JuMP syntax. Or maybe there is a way to keep a fast path for the common case, and keep the flexibility for extensions (with a performance penalty)?
It seems your use case is similar to disciplined convex reformulation. In JuMP, you need to write your convex model as constraints that are either …
Indeed, it's the same issue, as DCP may decompose a single expression into many constraints. For now, I can live without the possibility to remove CP constraints :)! (By the way, I didn't see any documents giving ideas on how to implement bridges on top of these nonlinear expressions; that's why I didn't think this issue was relevant.)
@odow I've been benchmarking this code a tad more, and it looks like my previous figures were wrong (I was using Revise to test the latest version of the code). When restarting Julia between runs (i.e. with and without this PR), I cannot find a real performance difference. Before:
After:
I've run both quite a few times, and the figures never wander far from these (give or take one or two milliseconds, for both). I've also tried replacing all these functions with a single one and a large …
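The single-function alternative mentioned here can be sketched as follows (hypothetical name `rw_if`; the real code would rewrite expressions rather than just tag them):

```julia
# Same observable behaviour as Val-dispatched methods, but with one
# function and an explicit `if` chain on the operator symbol: fewer
# methods to compile, but no way for extensions to hook in via dispatch.
function rw_if(op::Symbol, args...)
    if op === :+ || op === :- || op === :* || op === :/
        return (:known_op, args...)
    else
        return (:fallback, op, args...)
    end
end
```

The trade-off is exactly the one discussed in this thread: the `if` chain compiles once, while one method per operator multiplies the methods the compiler must specialize.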
I think the difference will mainly be in compilation, since it will need to compile many more methods due to …
By the way, is performance really important? Most use cases I can think of should rather be implemented as nonlinear bridges than this way (everything except overloading operators like …).
I'm a bit hesitant because it significantly increases compile time.
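A minimal way to separate compilation cost from steady-state cost when timing something like this (a generic sketch on a toy expression walker, not the PR's benchmark):

```julia
# Time the first call (which includes compilation and inference)
# separately from a subsequent call on the same function.
walk(ex) = ex isa Expr ? Expr(ex.head, map(walk, ex.args)...) : ex

ex = Meta.parse("x + 2 * y - z / 3")
t_first = @elapsed walk(ex)   # includes compilation of `walk`
t_later = @elapsed walk(ex)   # steady state
@assert walk(ex) == ex
```

With many `Val`-specialized methods, `t_first` is where the difference would show up, since each operator method must be compiled on first use.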
I made a few benchmarks on my own. JuMP 0.21.5:
This PR (not rebased on current JuMP):
Details of my version of Julia:
The difference in timing is not that large (although it's far from a scientific comparison).
We should also compare the compilation time of …
I already did some: #2241 (comment)
Excerpt from #2051. New take on #2229, much more general (and not much longer — unless you take fdef385, which unfortunately does not work…).
I'd prefer to see this PR merged instead of #2229, due to the functionalities that are possible with this, but maybe this is not the best way to implement it…
The general goal is to allow rewriting expressions at the JuMP level, only based on syntactic information, to provide more convenience for users. This only provides more points for extensions to plug into JuMP.
In more detail (based on the current developments of the CP sets): the modelling layer can decide to rewrite an expression like `y <= count(x .== 5)` as two constraints, a CP one (say `z == count(x .== 5)`) and a purely linear one (say `y <= z`), all of this at the JuMP layer. These new constraints can then be bridged by MOI if needed. The constraints modelled in the first place (like `y <= count(x .== 5)`) should not have MOI equivalents, because that would create a very large number of sets to support: instead of just `Count`, you would need `LessThanCount`, maybe `AffineLessThanCount` later on, and so on.

This raises the issue of deleting the new constraints: as a user, it only makes sense to have a reference to the constraint `y <= count(x .== 5)`; to delete it, two constraints and a variable must be deleted. The existing system of bridges does not work here, as it takes a MOI constraint as input (which, by design, the user-specified constraint is unlikely to be). Nothing is implemented regarding this yet. My idea is to have `_rewrite_expr` return the new things that the function call created (variables, constraints), so that JuMP can get rid of all of them when deleting the constraint.
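The bookkeeping idea described above can be sketched like this (hypothetical names `RewriteResult` and `rewrite_count_leq`, illustrative only — not the PR's actual API):

```julia
# The rewriting function returns, along with the rewritten expression,
# the variables and constraints it created, so that deleting the
# user-facing constraint can also delete everything it spawned.
struct RewriteResult
    expr::Any                        # the rewritten user-facing expression
    new_variables::Vector{Symbol}    # auxiliary variables introduced
    new_constraints::Vector{Any}     # auxiliary constraints introduced
end

# Rewrite `y <= count_expr` into `z == count_expr` plus `y <= z`.
function rewrite_count_leq(y::Symbol, count_expr::Expr)
    z = gensym(:z)
    return RewriteResult(:($y <= $z),
                         [z],
                         Any[:($z == $count_expr), :($y <= $z)])
end

r = rewrite_count_leq(:y, :(count(x .== 5)))
```

Deleting the original constraint would then amount to walking `r.new_variables` and `r.new_constraints` and removing each one from the model.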