
Build loss function for ensemble of ODEs, but with different sampling times for each simulation in the ensemble #218

Closed
TorkelE opened this issue Mar 17, 2023 · 5 comments


@TorkelE (Member) commented Mar 17, 2023

I have a system (modelled as an ODE) for which I have measurements using different initial conditions. I want to find its parameters. I have looked at this example: https://docs.sciml.ai/DiffEqParamEstim/stable/tutorials/ensemble/, and it is pretty much just what I want to do. However, there's one problem:

The various experiments are not sampled at the same timepoints. How do I handle this? In the example, in:

obj = build_loss_objective(enprob,Tsit5(),loss,Optimization.AutoForwardDiff(),trajectories=N,
                           saveat=data_times)

we build the loss function, but also set the options for the ODE solver (saveat=data_times). Here, I would need saveat to take a different value for each run of the ensemble. Is there a good way to do this?

(I have tried setting up my own version, which builds several non-ensemble losses and then sums them all up; a sketch of the idea is below. However, AD does not work on this, and it generally does not seem to work well.)
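
Roughly, a minimal sketch of what I mean (the names probs, data_times, and data are illustrative, and the L2 loss is just an example):

using OrdinaryDiffEq

# Sketch only: one loss per experiment, each solved with its own saveat.
# Assumes probs[i], data_times[i], and data[i] exist, and that data[i]
# matches the shape of Array(sol).
function experiment_loss(prob, times, data)
    return function (p)
        sol = solve(remake(prob; p = p), Tsit5(); saveat = times)
        sum(abs2, Array(sol) .- data)
    end
end

losses = [experiment_loss(probs[i], data_times[i], data[i]) for i in eachindex(probs)]
total_loss(u, _) = sum(l(u) for l in losses)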

@Vaibhavdixit02 (Member) commented

Unless I am wrong, you can't provide separate saveats in the solve for an EnsembleProblem, so I don't think this would be possible with it.

What issue were you facing with the separate solves and summing approach?

@TorkelE (Member, Author) commented Mar 17, 2023

Currently, I do something like this:

function make_optimization_problem(m::Model, exps::Vector{Experiments})
    # One cost function per experiment, each with its own sampling times
    cost_functions = [get_cost_function(m, exp) for exp in exps]
    function total_cost(u, p)
        sum(cf(u) for cf in cost_functions)
    end
    lb = get_lb(m)
    ub = get_ub(m)
    return Optimization.OptimizationProblem(total_cost, init_p(m); lb = lb, ub = ub)
end

However, for starters I get an "ERROR: Use OptimizationFunction to pass the derivatives or automatically generate them with one of the autodiff backends" error when I try to run solve using BFGS(), so I presume some of the AD stuff doesn't work through this.

Also, the results of the optimisations are really bad (while optimising on only a single experiment works). It might just be a natural thing, but as a step in figuring out why, I figured I should avoid custom code as much as possible and just use the standard SciML tools where possible.

@Vaibhavdixit02 (Member) commented

You haven't created an OptimizationFunction to pass to OptimizationProblem there; that's what the error is saying. The OptimizationFunction is what's necessary for the AD stuff to work.

@Vaibhavdixit02 (Member) commented

function make_optimization_problem(m::Model, exps::Vector{Experiments})
    cost_functions = [get_cost_function(m, exp) for exp in exps]
    function total_cost(u, p)
        sum(cf(u) for cf in cost_functions)
    end
    lb = get_lb(m)
    ub = get_ub(m)
    # Wrapping the cost in an OptimizationFunction tells Optimization.jl
    # which AD backend to use for generating the derivatives
    optf = OptimizationFunction(total_cost, AutoForwardDiff())
    return Optimization.OptimizationProblem(optf, init_p(m); lb = lb, ub = ub)
end
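
Solving should then work roughly like this (a sketch; BFGS() is assumed to come from the Optim backend loaded via OptimizationOptimJL):

using Optimization, OptimizationOptimJL

prob = make_optimization_problem(m, exps)
# With lb/ub set, the Optim backend should apply the box constraints
# (it wraps BFGS in Fminbox)
sol = solve(prob, BFGS())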

@TorkelE (Member, Author) commented Mar 17, 2023

Thanks, I'll try this, and hopefully it should work :)

TorkelE closed this as completed Mar 17, 2023