Using nonlinear expressions in multiple models? #473

Closed
rgiordan opened this issue Jun 26, 2015 · 4 comments
Labels
Category: Nonlinear (Related to nonlinear programming), wontfix

Comments

@rgiordan

Is there any way to re-use nonlinear expressions between multiple models? As a simple (and kind of silly, included only for concreteness) example, suppose I had something like this problem:

using JuMP

m = Model()

n = 10
z = exp(rand(n))
@defVar(m, x)
@defVar(m, y[i=1:n]);

@defNLExpr(y_obj[i=1:n], -0.5 * (y[i] - x) ^ 2 - exp(y[i]) + y[i] * z[i]);
@defNLExpr(x_obj, 1 / (1 + x^2) );
@defNLExpr(obj, x_obj + sum{y_obj[i], i=1:n});

@setNLObjective(m, Max, obj)

Suppose, for example, that I wanted to optimize each y_obj separately given a value of x, but still wanted autodiff Hessians for obj. I would want to share the expression y_obj[i=1:n] between the models used for optimizing and the model used to get the full Hessian.

I don't see how to do it since a variable must be bound to a single model. Is this possible in JuMP? Any recommendations for how to approach this use case?

(The example is deliberately oversimplified -- for a more complicated motivating example, you can consider a variational Poisson GLMM.)

@mlubin
Member

mlubin commented Jun 26, 2015

Given the design of variables being tied to models, it's hard to see how this can easily work. I think Casadi enables you to mix and match pieces in this way. You basically have to pass values back and forth across the models, but there are probably ways to wrap this up to make it less painful.
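
A minimal sketch of that value-passing idea, in the old JuMP 0.x syntax of the example above and assuming an NLP solver such as Ipopt is available; solve_y_sub and x_val are illustrative names, and n, z, m, x, y are reused from the original example:

using JuMP

# Illustrative helper: optimize a single y_obj[i] in its own small model,
# treating x as the fixed constant x_val (plain constants are allowed in NL expressions).
function solve_y_sub(i, x_val, z)
    sub = Model()
    @defVar(sub, y_s)
    @setNLObjective(sub, Max, -0.5 * (y_s - x_val)^2 - exp(y_s) + y_s * z[i])
    solve(sub)
    return getValue(y_s)
end

# Pass the sub-model optima back into the full model m as variable values,
# so that derivatives of obj can be evaluated there.
x_val = 0.0
setValue(x, x_val)
for i in 1:n
    setValue(y[i], solve_y_sub(i, x_val, z))
end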

@rgiordan
Author

What do you think about using the same model but creating different m_eval objects, like so:

# Build an evaluator while the model's objective is obj_1.
@setNLObjective(m, Min, obj_1)
m_const_mat = deepcopy(JuMP.prepConstrMatrix(m));
m_eval1 = deepcopy(JuMP.JuMPNLPEvaluator(m, m_const_mat));
MathProgBase.initialize(m_eval1, [:ExprGraph, :Grad, :Hess])

# Swap the objective to obj_2 and snapshot a second, independent evaluator.
@setNLObjective(m, Min, obj_2)
m_const_mat = deepcopy(JuMP.prepConstrMatrix(m));
m_eval2 = deepcopy(JuMP.JuMPNLPEvaluator(m, m_const_mat));
MathProgBase.initialize(m_eval2, [:ExprGraph, :Grad, :Hess])

I tried this out on a simple example, and it seems to work in the sense of giving objective values and first and second derivatives with respect to the different objective functions, but I don't know whether it violates some design decision I'm unaware of.
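
If this route is taken, the two evaluators can then be queried through the standard MathProgBase nonlinear interface; a usage sketch, where pt is a hypothetical evaluation point whose length equals the number of variables in m (11 in the example at the top):

# Hypothetical evaluation point: values for x and y[1:10], in the model's variable order.
pt = zeros(11)

# Objective value and gradient of obj_1 at pt.
f1 = MathProgBase.eval_f(m_eval1, pt)
g1 = zeros(length(pt))
MathProgBase.eval_grad_f(m_eval1, g1, pt)

# Sparse Hessian of the Lagrangian for obj_1; with no constraints the multiplier
# vector is empty, and sigma = 1.0 gives the Hessian of the objective itself.
hI, hJ = MathProgBase.hesslag_structure(m_eval1)
hvals = zeros(length(hI))
MathProgBase.eval_hesslag(m_eval1, hvals, pt, 1.0, Float64[])

# The same calls against m_eval2 give derivatives of obj_2 at the same point.
f2 = MathProgBase.eval_f(m_eval2, pt)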

@mlubin
Member

mlubin commented Jun 29, 2015

Sneaky. It looks like this particular case is safe at the moment, but I can't guarantee that it will be in the future. Some of the evaluation callbacks refer to the original model (e.g., for nonlinear constraints), but nonlinear objectives are a special case where everything needed is saved separately in the NLPEvaluator.

@IainNZ added the Category: Nonlinear label on Aug 11, 2015
@mlubin added the wontfix label on Jan 2, 2016
@mlubin
Member

mlubin commented Jan 2, 2016

After #638, expressions will be explicitly tied to models, so closing this as wontfix. The workaround is to create multiple models and set the values of the variables as needed to compute the desired derivatives.
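
A sketch of that workaround in the syntax of this issue's era: attach the same nonlinear expression to each model through an ordinary Julia function, then copy variable values across models by hand when derivatives are needed. The helper name and the fixed value 0.3 are illustrative, not part of JuMP:

using JuMP

# Illustrative helper: give any model the joint objective from the original example,
# using that model's own x and y variables and the data vector z.
function set_joint_objective!(m, x, y, z, n)
    @setNLObjective(m, Max,
        1 / (1 + x^2) + sum{-0.5 * (y[i] - x)^2 - exp(y[i]) + y[i] * z[i], i = 1:n})
end

n = 10
z = exp(rand(n))

# Full model: used for autodiff Hessians of the joint objective.
m_full = Model()
@defVar(m_full, x_f)
@defVar(m_full, y_f[i = 1:n])
set_joint_objective!(m_full, x_f, y_f, z, n)

# Second model with the same expression, e.g. for optimizing the y's with x pinned.
m_sub = Model()
@defVar(m_sub, x_s)
@defVar(m_sub, y_s[i = 1:n])
set_joint_objective!(m_sub, x_s, y_s, z, n)
@addConstraint(m_sub, x_s == 0.3)

# Keep the full model in sync by setting its variable values by hand.
setValue(x_f, 0.3)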
