Some ideas #68

Open
devmotion opened this issue Apr 17, 2018 · 6 comments

@devmotion
Member

In my opinion, DelayDiffEq is a lot more complicated than it should be, which also makes analysis and fixes more difficult. Hence I started to think about how the current situation could be improved.

Status quo

Currently, when a new DDEIntegrator is initialized, an ODEIntegrator is created (but not initialized) whose function can never be evaluated directly because it has the wrong number of arguments. In a rather confusing way, this integrator is wrapped, together with its solution, in a HistoryFunction, which in turn is wrapped in a new differential equation of the form OrdinaryDiffEq expects. On top of this, a DDEIntegrator is defined that duplicates all fields of an ODEIntegrator and uses this closure for function evaluations, so that OrdinaryDiffEq can be used for integration; in a quite involved way the initial ODEIntegrator is updated to keep the history function correct. Moreover, to reduce memory usage the caches of this ODEIntegrator are transferred to the DDEIntegrator, and the solution of the integrator is shortened during integration whenever possible.
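
For illustration, here is a minimal sketch of that layering with simplified stand-in types (the actual HistoryFunction and integrator wiring in DelayDiffEq.jl are more involved than this):

```julia
# user-provided DDE right-hand side of the form f(du, u, h, p, t)
function f_dde!(du, u, h, p, t)
    du[1] = -h(p, t - 1)[1]
end

# stand-in for the history function: uses the user-supplied initial history
# h0 before the initial time and the (growing) ODE solution afterwards
struct SimpleHistory{H,S}
    h0::H        # initial history h0(p, t)
    sol::S       # solution object of the wrapped ODE integrator
    t0::Float64  # initial time
end

(h::SimpleHistory)(p, t) = t <= h.t0 ? h.h0(p, t) : h.sol(t)

# the closure handed to OrdinaryDiffEq: an ODE right-hand side of the
# standard form (du, u, p, t) with the history function enclosed
make_ode_rhs(f, h) = (du, u, p, t) -> f(du, u, h, p, t)
```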

Ideas

  • Create one fully functional ODEIntegrator: this is currently not possible since the DDE function also depends on the argument h, which itself depends on the interpolation of the ODEIntegrator that does not exist yet; possible solution (a rough sketch of both wrappers follows this list):
    • Wrap h in a HistoryFunction that can be called as (::HistoryFunction)(integrator, t, ::Val{deriv}; idxs = nothing) (and so on):
      1. does not require the non-existent integrator at creation time
      2. requires the ability to pass integrator instead of the parameters p during integration, as was already discussed when the breaking changes were introduced in January
      3. passing integrator to the wrapper would still allow passing only integrator.p to the wrapped initial function h
    • Wrap f and this HistoryFunction (probably together with the analytic solution, gradient, Jacobian, etc.) in a DDEFunction <: AbstractDiffEqFunction:
      1. allows defining (::DDEFunction)(du, u, integrator, t) (and so on), as required by OrdinaryDiffEq
      2. requires passing integrator instead of the parameters p
      3. would still allow passing only integrator.p to f
      4. would group all the different functions, such as f and h, that belong together (e.g. the analytic solution usually depends on both)
  • Wrap this ODEIntegrator in a DDEIntegrator that contains no duplicate fields:
    • currently almost every update of ODEIntegrator breaks DDEIntegrator
  • Use the ODEIntegrator for integration and the DDEIntegrator mostly for its control, i.e. for resets and fixed-point iteration:
    • from previous experience, the most difficult part would be to implement inter-/extrapolations and fixed-point iterations correctly
  • Add the possibility to only save dense output for certain indices:
    • currently a very complicated algorithm exists to reduce the solution in special cases
    • based on the implementation of RADAR5, one could (alternatively) save dense output only for certain indices (but still save the solution at all or given time points for all or given indices)
    • this would probably be simpler, and would also work for state-dependent delays
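
A rough sketch of how these two wrappers could look (the names DDEFunction and HistoryFunction follow the list above, but the field layout and signatures here are assumptions, not the existing DelayDiffEq/DiffEqBase implementation):

```julia
struct HistoryFunction{H}
    h::H        # user-supplied initial history h(p, t)
    t0::Float64 # initial time of the problem
end

# called with the integrator instead of the parameters; only integrator.p is
# forwarded to the wrapped initial history, while values past t0 come from
# the integrator's own (growing) interpolation
function (hf::HistoryFunction)(integrator, t, ::Val{deriv}; idxs = nothing) where deriv
    if t <= hf.t0
        hf.h(integrator.p, t)
    else
        integrator.sol(t; idxs = idxs)  # derivative handling omitted in this sketch
    end
end

struct DDEFunction{F,H}
    f::F                  # user DDE rhs f(du, u, h, p, t)
    h::HistoryFunction{H} # wrapped history
    # analytic solution, gradient, Jacobian, etc. could be grouped here too
end

# the signature OrdinaryDiffEq would call if it passed the integrator
# instead of the parameters p
function (df::DDEFunction)(du, u, integrator, t)
    df.f(du, u, (p, s) -> df.h(integrator, s, Val(0)), integrator.p, t)
end
```
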
@ChrisRackauckas
Member

The biggest issue, and the reason for the two integrators, is that during the iterations the two have to be disconnected: you build the new interval solution while using the old interpolation, and iteratively contract based on full-interval interpolations. I don't think you can refine that step by step, since otherwise the interpolation may oscillate.

@devmotion
Member Author

Yes, that's clearly the biggest issue, and maybe one has to adapt or change some of the ideas. Maybe this would work:

  • define DDEFunction as outlined above
  • define only one DDEIntegrator (and no ODEIntegrator):
    • it would basically extend ODEIntegrator with additional fields for the fixed-point iterations and a duplicate cache (in more or less the same way it is currently done); by the way, maybe macros could ensure that DDEIntegrator is an extension of ODEIntegrator by adding those fields to the type definition (a toy sketch of that idea follows below)
    • before every iteration the interpolation cache is adjusted
    • since there would be dedicated arrays, caches, etc. just for the interpolation instead of a complete second integrator, the setup would probably also be easier to understand - with two integrators it is not immediately clear what their purpose is and, in particular, that most fields of the ODEIntegrator are not used (or are abused)
  • it would still be interesting to restrict the dense solution to certain indices

But maybe there are other problems with this approach...
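
As a toy illustration of the metaprogramming idea mentioned above (nothing like this exists in DelayDiffEq; the names are made up), one could generate the extended struct from the fields of a base type:

```julia
# build the definition of an "extended" struct that repeats all fields of a
# base type and appends extra ones, so e.g. a DDEIntegrator could be kept in
# sync with ODEIntegrator automatically
function extended_struct_expr(name::Symbol, base::DataType, extra)
    fields = Any[:($f::$(fieldtype(base, f))) for f in fieldnames(base)]
    append!(fields, Any[:($f::$T) for (f, T) in extra])
    Expr(:struct, true, name, Expr(:block, fields...))  # mutable struct
end

# toy base type standing in for ODEIntegrator
mutable struct ToyIntegrator
    t::Float64
    u::Vector{Float64}
end

# generate and evaluate the extended type with extra fixed-point fields
eval(extended_struct_expr(:ToyDDEIntegrator, ToyIntegrator,
                          [:fixedpoint_tol => Float64, :prev_u => Vector{Float64}]))

fieldnames(ToyDDEIntegrator)  # (:t, :u, :fixedpoint_tol, :prev_u)
```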

@ChrisRackauckas
Member

> it would still be interesting to restrict the dense solution to certain indices

That could be done by allowing a choice of indices for which the k is then copied over. It just needs an option and handling in savevalues!.
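
For concreteness, a rough sketch of what that could look like (dense_idxs and save_dense! are made-up names, not the actual savevalues! code):

```julia
# when a step is saved, copy the interpolation data k only for a chosen set
# of indices instead of all components
function save_dense!(k_saved, k_step, dense_idxs)
    # keep only the selected components of every stage vector of this step
    push!(k_saved, [stage[dense_idxs] for stage in k_step])
    return k_saved
end

# usage: a 3-dimensional system with 4 stages per step, dense output kept
# only for the first component
k_saved = Vector{Vector{Vector{Float64}}}()
k_step = [rand(3) for _ in 1:4]
save_dense!(k_saved, k_step, 1:1)
```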

It sounds like we're heading down the same path we did before. I cannot say I enjoy the current code at all (this and MultiScaleArrays.jl are what I consider my nightmare ideas, but since they give so many nice features I can't get rid of them), but it's difficult to do much more. However, I would like to give the "integrator as parameters" approach a try, and it's one of the reasons for allowing that option. But it will take a significant refactor of the OrdinaryDiffEq.jl code to allow for it, and I will want to do it in StochasticDiffEq.jl as well if we can (and that will give us SDDEs!).

@ChrisRackauckas
Member

If we wanted, we can make (du,u,integrator,t) -> f(du,u,integrator,p,t) or even build the history function there, so the interface stays similar but uses the OrdinaryDiffEq integrator passing instead of enclosing it.
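
A sketch of the "build the history function there" variant, with hypothetical helper names:

```julia
# f_dde! is the user function f(du, u, h, p, t), h0 the initial history
# h0(p, t), and t0 the initial time; the wrapper receives the integrator
# from OrdinaryDiffEq and constructs h on the fly before calling f_dde!
function wrap_dde(f_dde!, h0, t0)
    (du, u, integrator, t) -> begin
        # history rebuilt on every call: initial history before t0, the
        # integrator's interpolation afterwards
        h = (p, s) -> s <= t0 ? h0(p, s) : integrator.sol(s)
        f_dde!(du, u, h, integrator.p, t)
    end
end
```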

@devmotion
Member Author

> That could be done by allowing a choice of indices for which the k is then copied over. It just needs an option and handling in savevalues!.

I hope I can put together a PR in the next few days or weeks (it seems this feature has to be added to OrdinaryDiffEq, though).

> It sounds like we're heading down the same path we did before.

Yes, the revised idea is more similar to the current approach, but it is still not the same. In my opinion, it would have some advantages, the biggest being that it avoids the multiple layers of closures and prevents the abuse of fields and side effects that are currently possible within the interpolating ODEIntegrator.

> However, I would like to give the "integrator as parameters" approach a try, and it's one of the reasons for allowing that option. But it will take a significant refactor of the OrdinaryDiffEq.jl code to allow for it, and I will want to do it in StochasticDiffEq.jl as well if we can (and that will give us SDDEs!).

Sure, so far this was just me dreaming about a more pleasant setup. I noticed that you mentioned this idea some time ago but that it is not implemented yet. At which level should one decide whether to call f with integrator or integrator.p? During every function evaluation? Or by creating (du,u,integrator,t) -> f(du,u,integrator.p,t) if the user does not specify a certain argument in solve?

> If we wanted, we can make (du,u,integrator,t) -> f(du,u,integrator,p,t)

Instead of something like DDEFunction?

> even build the history function there

Wouldn't this be quite some overhead if the history function is built during every function evaluation?

@ChrisRackauckas
Member

> I hope I can put together a PR in the next few days or weeks (it seems this feature has to be added to OrdinaryDiffEq, though).

Yeah, it would have to go there first.

> Wouldn't this be quite some overhead if the history function is built during every function evaluation?

Maybe. Depends on compiler optimizations.

> At which level should one decide whether to call f with integrator or integrator.p? During every function evaluation? Or by creating (du,u,integrator,t) -> f(du,u,integrator.p,t) if the user does not specify a certain argument in solve?

I was just going to go into every perform_step! and make the choice based on an argument that would be a value type in the integrator, but there may be a more elegant way to do this.
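
Roughly like this sketch (field and helper names are made up): a Val stored in the integrator decides, resolvable at compile time, whether f is called with the parameters or with the integrator itself.

```julia
pass_arg(integrator, ::Val{false}) = integrator.p  # classic: pass the parameters
pass_arg(integrator, ::Val{true})  = integrator    # new: pass the integrator

# inside each perform_step! the rhs call would then look roughly like
#     f(du, u, pass_arg(integrator, integrator.integrator_as_p), t)
# where integrator_as_p is a hypothetical Val field set from a solve option
```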
