Acceleration of covariance-based analysis by Jacobian simplification #55
Computation of covariance-based uncertainties requires estimating the Jacobian at the found solution. For non-linear parameters this must be done numerically. Until now, DeerLab used the `numdifftools.Jacobian` function for this. It performs a series of finite-difference calculations that yield very accurate Jacobians. However, it is very costly due to the large number of function evaluations and has become a bottleneck for most DeerLab fit functions. For example, a simple 4-pulse DEER fit with `fitsignal` might take 8 s to run, of which 7 s correspond to the Jacobian estimation.

To address this, I have substituted the `numdifftools`-based Jacobian estimation with a simple one-step finite-difference approximation. While it has the potential to be less accurate, it yields results in agreement with `numdifftools` for DEER models. Therefore, I have removed the dependency on `numdifftools` altogether and introduced the simpler version.

This has led to runtime accelerations of the fit functions by factors of 2x to 10x. For example, the 4-pulse DEER fit that might have taken 8 s now requires just 1 s, with the NNLS problem being the bottleneck, as it should be.