
Estimations of the interpolation error #151

Open
devmotion opened this issue Aug 27, 2019 · 2 comments

@devmotion
Member

I would like to try a neat heuristic that is used in RADAR5 and that I assume could be helpful for non-smooth solutions such as the artificial example by Hairer and Guglielmi, which is mentioned as the third example in https://www.sciencedirect.com/science/article/pii/S147466701736929X. I am not able to solve this problem properly: I can compute a solution with TRBDF2, SDIRK2, or KenCarp4 that converges to the correct steady state, but instead of the expected oscillating non-smooth behaviour DelayDiffEq just picks one branch. As Guglielmi writes, "given the non smoothness of the solution, the control of the error in the dense output is crucial". I think we might not perform well on this example because we do not control the interpolation error well enough.
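
For reference, a minimal sketch of how such a problem would be set up in DelayDiffEq; the equation below is only a placeholder with a constant lag (not the actual Guglielmi-Hairer example), and the tolerances are arbitrary:

```julia
using DelayDiffEq, OrdinaryDiffEq

# Placeholder DDE with one constant lag -- NOT the Guglielmi-Hairer example,
# just an illustration of the solver setup mentioned above.
f(u, h, p, t) = -h(p, t - 1)   # du/dt depends on the delayed state u(t - 1)
h(p, t) = 1.0                  # constant history for t <= 0

prob = DDEProblem(f, 1.0, h, (0.0, 10.0); constant_lags = [1.0])

# DelayDiffEq wraps the OrdinaryDiffEq integrators via MethodOfSteps,
# so TRBDF2/SDIRK2/KenCarp4 are used like this:
sol = solve(prob, MethodOfSteps(TRBDF2()); abstol = 1e-8, reltol = 1e-8)
```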

I think one limitation of the algorithm right now is that for accepting/rejecting steps we (mainly) check the convergence of the fixed-point iteration and the local error estimates at time t + dt, but we would also like an estimate of the error of the continuous interpolation. The main idea used in RADAR5 is to take two different interpolations and check how close they are. It is just a heuristic (if I remember correctly), but it still might help RADAR5 to outperform DelayDiffEq on this class of problems.
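
A rough sketch of what such a check could look like (not RADAR5's actual implementation; the interpolant callables, sample points, and tolerance scaling are all assumptions):

```julia
# Compare two different interpolants of the current step at a few interior
# points of [t, t + dt] and return a scaled error estimate, where values
# <= 1 would mean the dense output is accepted. `interp_a` and `interp_b`
# are assumed to be callables u ≈ interp(θ) for θ ∈ [0, 1].
function dense_error_estimate(interp_a, interp_b, uprev, u;
                              abstol = 1e-6, reltol = 1e-3, npoints = 4)
    err = 0.0
    for θ in range(0, 1; length = npoints + 2)[2:(end - 1)]  # interior points only
        ua, ub = interp_a(θ), interp_b(θ)
        scale = abstol + reltol * max(abs(uprev), abs(u))
        err = max(err, abs(ua - ub) / scale)
    end
    return err
end
```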

As far as I know, there is currently no easy way to use different interpolation methods on the same time interval. I assume that addressing this issue would (mainly) require upstream changes in OrdinaryDiffEq and DiffEqBase that introduce proper interpolation types which can be used in a modular and exchangeable way.

@ChrisRackauckas
Member

> As far as I know, there is currently no easy way to use different interpolation methods on the same time interval. I assume that addressing this issue would (mainly) require upstream changes in OrdinaryDiffEq and DiffEqBase that introduce proper interpolation types which can be used in a modular and exchangeable way.

Indeed that is probably what it would take.

@devmotion
Member Author

Just copied from Slack for archiving purposes:

So, I think both approaches would be useful and could be combined since they work differently. First, we could base the convergence/divergence check on a residual of the interpolation itself, to rule out the case where the iteration has converged at u(t + dt) but the interpolation is still fluctuating. Second, we could combine the local error estimator of the ODE method with an error estimator for the dense output, which would be based on comparing the interpolation that we use with a different one, e.g., a Hermite interpolation that does not use the state u(t) (otherwise that estimate would always be 0 for many methods, since they just use Hermite interpolation).
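
A hedged sketch of how the two estimates could be merged (all names are illustrative: `local_eest` stands for the ODE method's scaled local error estimate and `dense_eest` for a dense-output estimate like the one sketched further above):

```julia
# Merge the ODE method's local error estimate at t + dt with an estimate of
# the error of the continuous output; both are assumed to be scaled such
# that values <= 1 are acceptable.
combined_error_estimate(local_eest, dense_eest) = max(local_eest, dense_eest)

# A step (and its interpolation) would be accepted only if neither estimate
# is too large.
accept_step(local_eest, dense_eest) = combined_error_estimate(local_eest, dense_eest) <= 1
```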
