
Update docs: "Using JuMP with DiffEqParamEstim" #187

Closed

fleimgruber opened this issue Nov 22, 2022 · 8 comments

Comments

fleimgruber commented Nov 22, 2022

In https://github.com/SciML/DiffEqParamEstim.jl/blob/da13a16b350679668cd8bb050990a95f8cb2abc9/docs/src/tutorials/jump.md the JuMP API usage should be updated for newer JuMP versions. I was not able to make the example work with JuMP 1.1.1; e.g., the setsolver function is no longer defined.

I tried to adapt the example and came up with the changes below (posted as one snippet, which is easier to experiment with than the Markdown file). Notable changes: set_optimizer_attribute is used instead of setsolver, and all_variables instead of .colNames. One problem remains: the variables σ, ρ, β stay at their start values, i.e. they are not changed by the optimization. Am I missing something?

using OrdinaryDiffEq, DiffEqParamEstim, JuMP, NLopt, Plots

# Lorenz system
function g(du,u,p,t)
  σ,ρ,β = p
  x,y,z = u
  du[1] = dx = σ*(y-x)
  du[2] = dy = x*(ρ-z) - y
  du[3] = dz = x*y - β*z
end

u0 = [1.0;0.0;0.0]
t = 0.0:0.01:1.0
tspan = (0.0,1.0)
model_ode(p_) = ODEProblem(g, u0, tspan,p_)
solve_model(mp_) = OrdinaryDiffEq.solve(model_ode(mp_), Tsit5(),saveat=0.01)
mock_data = Array(solve_model([10.0,28.0,8/3]))  # data generated with the true parameters σ=10, ρ=28, β=8/3

loss_objective(mp_, dat) = build_loss_objective(model_ode(mp_), Tsit5(), L2Loss(t,dat))

juobj(args...) = loss_objective(args, mock_data)(args)
jumodel = Model(NLopt.Optimizer)
set_optimizer_attribute(jumodel, "algorithm", :LD_MMA)
JuMP.register(jumodel, :juobj, 3, juobj, autodiff=true)
@variables jumodel begin
    σ,(start=8)
    ρ,(start=25.0)
    β,(start=10/3)
end
@NLobjective(jumodel, Min, juobj(σ, ρ, β))

JuMP.optimize!(jumodel)
best_mp = value.([jumodel[s] for s in Symbol.(all_variables(jumodel))])

sol = OrdinaryDiffEq.solve(best_mp |> model_ode, Tsit5())
plot(getindex.(sol.(t),1))
scatter!(mock_data[1, :], markersize=2)
@ChrisRackauckas
Member

Yes it's probably for a much older version of JuMP. It would be good to get some help updating that, and setting it to an @example so it gets tested by CI.
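
For reference, a Documenter.jl @example block in docs/src/tutorials/jump.md would make the snippet run during the docs build, so an API break fails CI (a minimal sketch; the block label "jump" is arbitrary):

```@example jump
using OrdinaryDiffEq, DiffEqParamEstim
# everything inside an @example block is executed by Documenter
# during `makedocs`, so an outdated API call fails the docs build
```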

@ChrisRackauckas
Member

Or, what's more likely is an update to this library which simply outputs OptimizationProblems. That would probably be the sanest way to do this.

@fleimgruber
Author

fleimgruber commented Nov 26, 2022

Thanks for your thoughts and suggestions!

> Yes it's probably for a much older version of JuMP. It would be good to get some help updating that, and setting it to an @example so it gets tested by CI.

I feel I could tackle this, as I have already figured out the JuMP API changes. The one missing bit is the unexpected optimization result (the variables still have their start values), but let's look at that in the corresponding PR.

> Or, what's more likely is an update to this library which simply outputs OptimizationProblems. That would probably be the sanest way to do this.

In this case I think a dev should come up with that DiffEqParamEstim API change. In the meantime, changing the docs to @example for now would still be good and could be adapted to the new DiffEqParamEstim API later. Also, just curious: is there an actual type you referred to by "OptimizationProblems", or is that just a generic term for such types?

@Vaibhavdixit02
Member

> Also, just curious: is there an actual type you referred to by "OptimizationProblems", or is that just a generic term for such types?

It's from the Optimization.jl interface.
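
For context, a minimal problem in the Optimization.jl interface looks roughly like this (a sketch with a toy Rosenbrock objective rather than an ODE loss):

using Optimization, OptimizationOptimJL

# objective f(u, p): u are the decision variables, p is fixed data
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, [0.0, 0.0], [1.0, 100.0])
sol = solve(prob, BFGS())  # minimizer is approximately [1.0, 1.0]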

@ChrisRackauckas
Member

This tutorial is now going away due to the revamp to the Optimization.jl interface. The "similar" piece is to use OptimizationMOI.
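
Roughly, OptimizationMOI lets the same OptimizationProblem be handed to any MathOptInterface-compatible solver (a sketch reusing prob from the snippet above; Ipopt stands in for whichever MOI solver is wanted):

using OptimizationMOI, Ipopt

# hand the OptimizationProblem to a MathOptInterface solver
sol = solve(prob, Ipopt.Optimizer())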

@fleimgruber
Author

@ChrisRackauckas Thanks for the update and your efforts! I could try my hand at an equivalent tutorial using the new interface if needed. Looking at the tests: which one should go into a tutorial, if any?

@ChrisRackauckas
Member

The lorenz_test is a fine one, though I'd say writing it from Lotka-Volterra (LV) is better because Lorenz is weird and chaotic. If you do Lorenz, it should be done with multiple shooting.
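
For orientation, a Lotka-Volterra version under the revamped interface might look like this (a sketch assuming the newer DiffEqParamEstim API where build_loss_objective takes an AD type and returns an Optimization.jl-compatible objective; check the current docs for the exact signature):

using OrdinaryDiffEq, DiffEqParamEstim, Optimization, OptimizationOptimJL

function lv!(du, u, p, t)
    du[1] = p[1] * u[1] - p[2] * u[1] * u[2]    # prey
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]   # predator
end

u0 = [1.0, 1.0]
t = 0.0:0.1:10.0
prob = ODEProblem(lv!, u0, (0.0, 10.0), [1.5, 1.0, 3.0, 1.0])
data = Array(solve(prob, Tsit5(), saveat = t))  # synthetic "measurements"

optf = build_loss_objective(prob, Tsit5(), L2Loss(t, data),
                            Optimization.AutoForwardDiff(),
                            maxiters = 10000, verbose = false)
optprob = OptimizationProblem(optf, [1.2, 0.8, 2.8, 1.2])
sol = solve(optprob, BFGS())  # should recover values near [1.5, 1.0, 3.0, 1.0]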

@ChrisRackauckas
Member

The JuMP docs were removed because it's all Optimization.jl now. But we should probably revive something around nonlinear constraints.
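
On the nonlinear-constraints point, Optimization.jl expresses them through the cons argument of OptimizationFunction together with lcons/ucons bounds on the problem (a sketch with a toy objective and one inequality constraint, solved through OptimizationMOI/Ipopt; the names obj and cons! are illustrative):

using Optimization, OptimizationMOI, Ipopt

obj(u, p) = (u[1] - 1)^2 + (u[2] - 2)^2
# in-place constraint function: res[i] holds the value of constraint i
function cons!(res, u, p)
    res[1] = u[1]^2 + u[2]^2
    return nothing
end

optf = OptimizationFunction(obj, Optimization.AutoForwardDiff(); cons = cons!)
# enforce u[1]^2 + u[2]^2 <= 1.0
prob = OptimizationProblem(optf, [0.5, 0.5]; lcons = [-Inf], ucons = [1.0])
sol = solve(prob, Ipopt.Optimizer())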
