Complete Plasmo.jl rewrite and updates for JuMP nonlinear interface #105
Conversation
```julia
edge_pointer = optiedge.backend.optimizers[graph.id]
dual_value = MOI.get(edge_pointer, MOI.ConstraintDual(), linkref)
return dual_value
```
```julia
function JuMP.set_objective_function(graph::OptiGraph, expr::JuMP.AbstractJuMPScalar)
```
Wondering if there needs to be a separate function for this when `expr` is a single `NodeVariableRef` and not an expression. If I run the code below, it throws an error, and if I query the moi_backend's `optimizer.is_objective_function_set`, it returns false.
```julia
using Plasmo, HiGHS
g = OptiGraph()
set_optimizer(g, HiGHS.Optimizer)
@optinode(g, ntest)
@variable(ntest, var[1:2] >= 0)
@objective(g, Min, ntest[:var][1])
optimize!(g)
```
I'm not quite sure what is going on here. A variable objective function works with other optimizers like Ipopt. It does not look like JuMP does anything special to handle this case, yet a variable objective definitely works when using HiGHS directly through JuMP. I'll open an issue for this if this PR doesn't resolve it.
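A possible workaround in the meantime (untested on my end; it assumes affine promotion behaves the same for `NodeVariableRef` as for JuMP variables) is to wrap the variable in an affine expression so the backend receives a `ScalarAffineFunction` objective rather than a bare variable:

```julia
using Plasmo, HiGHS

g = OptiGraph()
set_optimizer(g, HiGHS.Optimizer)
@optinode(g, ntest)
@variable(ntest, var[1:2] >= 0)

# multiplying by 1.0 promotes the single variable to an affine
# expression, so the objective should be passed to the backend as a
# ScalarAffineFunction instead of a bare variable index
@objective(g, Min, 1.0 * ntest[:var][1])
optimize!(g)
```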
```julia
JuMP.delete(JuMP.owner_model(nvref), JuMP.BinaryRef(nvref))
return nothing
end
```
You had asked if there were any other JuMP methods we might be missing. This is not essential, but having `JuMP.relax_integrality` could be convenient. Maybe this is just a `# TODO:` for now, but it could be nice in the future. In the DDP solver, I do the relaxation manually.
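For reference, a minimal sketch of the manual approach (it assumes `all_variables(graph)` and the usual JuMP variable methods work on node variables; unlike `JuMP.relax_integrality`, this sketch does not return a function that restores the original integrality constraints):

```julia
using Plasmo, JuMP

# hand-rolled integrality relaxation over a whole optigraph
function manual_relax_integrality(graph::OptiGraph)
    for var in all_variables(graph)
        if is_binary(var)
            unset_binary(var)
            # a binary variable implies bounds of 0 and 1; keep them
            lb = has_lower_bound(var) ? max(lower_bound(var), 0.0) : 0.0
            ub = has_upper_bound(var) ? min(upper_bound(var), 1.0) : 1.0
            set_lower_bound(var, lb)
            set_upper_bound(var, ub)
        elseif is_integer(var)
            unset_integer(var)
        end
    end
    return nothing
end
```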
We also need `set_normalized_coefficient`. These may be a simple copy-paste from JuMP.jl. Let's try to get these methods working in this PR.
Sounds good. If you want me to try to add those, I can do that on Monday. If you want to do it yourself, that's fine too.
I added the `relax_integrality` methods. Feel free to take a stab at `set_normalized_coefficient`; JuMP has a good example implementation. You will want to call `MOI.modify` on each possible backend like we do when adding constraints.
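Roughly, I'm picturing something like the sketch below. The `graph_backends` helper is hypothetical, and the per-backend constraint/variable indices would need to be mapped through Plasmo's backend like the dual query above; JuMP's own `set_normalized_coefficient` is the model to copy.

```julia
using Plasmo, JuMP
import MathOptInterface as MOI

# sketch: change the coefficient of `variable` in the affine
# `constraint` on every backend that holds a copy of it, mirroring
# how constraints are pushed to each backend when added
function JuMP.set_normalized_coefficient(
    constraint::JuMP.ConstraintRef,
    variable::NodeVariableRef,
    value::Number,
)
    for backend in graph_backends(JuMP.owner_model(constraint)) # hypothetical helper
        MOI.modify(
            backend,
            JuMP.index(constraint),
            MOI.ScalarCoefficientChange(JuMP.index(variable), Float64(value)),
        )
    end
    return nothing
end
```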
@odow All of the private MOI methods should be gone with this PR. Let me know if you come across anything problematic.
This is a mega PR that rewrites the core of Plasmo.jl to fully use the new JuMP nonlinear interface, in addition to a refactor meant to make the package more graph-centric (versus the current v0.5, which is very node-centric).
The main theme of Plasmo.jl is now to treat an optigraph as an optimization problem made of nodes and edges; a user is free to modify it, or to generate a new optigraph by partitioning or querying the graph topology. The key change is to strictly use graphs as subproblems (as opposed to nodes), which standardizes the solution approaches users might take (it is still possible to optimize individual nodes, but doing so internally creates a new graph containing that one node).
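To make this concrete, here is a small sketch of the intended graph-centric workflow (macro names follow the snippets elsewhere in this PR; exact behavior may differ in the final API):

```julia
using Plasmo, HiGHS

graph = OptiGraph()

# nodes own their local variables and constraints
@optinode(graph, nodes[1:2])
for node in nodes
    @variable(node, x >= 0)
    @constraint(node, x <= 4)
end

# linking constraints live on the edges between nodes
@linkconstraint(graph, nodes[1][:x] + nodes[2][:x] >= 3)

# the graph itself is the optimization problem that gets solved
@objective(graph, Min, nodes[1][:x] + 2 * nodes[2][:x])
set_optimizer(graph, HiGHS.Optimizer)
optimize!(graph)
```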
In the v0.5 version of Plasmo.jl, we use a `JuMP.Model` for each node, which does not scale when nodes contain few variables and constraints but the graph consists of thousands of nodes (e.g. dynamic optimization problems with collocation methods). The new implementation makes nodes and edges more lightweight by storing their model data in the graph they are created in. The v0.5 implementation is also quite hacky in how it integrates `JuMP.Model` objects with the optigraph. It is still possible to set a `JuMP.Model` on a node, but this copies the model data over; the `JuMP.Model` itself is not modified by the operation. This also means users should work through the optinode once they set a `JuMP.Model` if they want to make further modifications to the graph.
Other major changes include:
- The `LinkConstraintRef` type is no longer used to store linking constraints; linking constraints are treated as standard MOI constraints that exist on edges.

Short-term issues to address:
The Long-term Roadmap
GraphOptInterface
There are a couple of directions we could take Plasmo.jl from here. I think developing GraphOptInterface.jl and using it to interface with MadNLP.jl to perform Schur decomposition could be a useful start. I have always wanted to make a standard interface to DSPopt.jl and HiOp.jl, although those packages do not seem to be actively maintained. We would also need to think about how to do distributed computing with GraphOptInterface.jl.
Distributed OptiGraphs
We could also develop distributed optigraphs that work the same way as normal optigraphs (possibly by parameterizing the optigraph with some new types). These graphs could support writing standard distributed algorithms using the same methods that exist on the current OptiGraph. The way we manage distributed structures could also use or mirror what we do in GraphOptInterface.jl.