How to check and improve optimization

The challenge

Parameter optimization of ODE models is challenging because

  • there is no analytical solution of the ODEs, thus the objective function (and its derivatives) can only be evaluated numerically, i.e. with limited accuracy
  • the objective function depends non-linearly on the parameters
  • models might be large, i.e. have many parameters
  • the amount of available data and observables is typically limited, which often yields non-identifiability
  • usually only rough knowledge about parameter ranges is available
  • additional numerical challenges like events have to be accounted for
  • constraints might have to be taken into account

It is therefore essential to ensure that optimization works reliably.

Guidelines for reliable optimization

  • Repeat optimization with different initial guesses, e.g. apply multi-start optimization with arFitLHS, in order to check that the optimization algorithm converges (at least locally), i.e. that an optimum is found repeatedly. This also increases the chance of finding the global optimum, whereas a single optimization run can get stuck in a local optimum.
  • Optimize the parameters on the log scale by default (indicated in D2D by ar.qLog10). This concerns in particular strictly positive parameters, e.g. parameters that represent initial values and rate constants. Analogously, initial guesses and priors should be drawn or defined on the log scale.
  • Specify lower and upper bounds (indicated in D2D by ar.lb and ar.ub). This improves the convergence of lsqnonlin (the default optimization algorithm in D2D) and other optimization algorithms and prevents integration errors.
  • Allow several orders of magnitude, i.e. use non-restrictive lower and upper bounds for the parameters
  • Don't use finite differences for calculating derivatives, and don't let the optimization algorithm approximate them internally. Instead, use the sensitivity equations, which D2D exploits by default. A minimal setup sketch illustrating these guidelines follows this list.
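
The following sketch illustrates these guidelines, assuming a model has already been loaded and compiled (e.g. via arInit, arLoadModel, arLoadData, arCompileAll). The bound values are illustrative, and ar.config.useSensis is an assumption in the sense that it is not mentioned above; it is the switch D2D uses for sensitivity-based derivatives.

```matlab
% Minimal setup sketch; concrete values are illustrative, not recommendations.
ar.qLog10(:) = 1;          % optimize all parameters on the log10 scale
ar.lb(:)     = -5;         % lower bounds (log10 scale, i.e. p >= 1e-5)
ar.ub(:)     = +3;         % upper bounds (log10 scale, i.e. p <= 1e3)

ar.config.useSensis = 1;   % derivatives from sensitivity equations, not finite differences

arFitLHS(100);             % multi-start optimization from 100 latin hypercube guesses
```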

Checks for reliable optimization

  • Convergence can be checked by multi-start optimization, e.g. by arFitLHS or arLocalLHS.
  • Convergence problems can usually be seen in likelihood profiles, i.e. it should be checked whether the profiles are sufficiently smooth.
  • Look at the output of the optimization algorithm during a fit; this can be enabled via ar.config.optim.Display = 'iter'. Convergence problems are indicated by several iterations without improvement followed by sudden large improvements. Similarly, small improvements over many (>100) iterations indicate convergence issues. A sketch of these checks follows this list.
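
A minimal sketch of these checks; arPlotChi2s, arPLEInit and ple are not mentioned above, so treat the exact names as assumptions (they are the usual D2D commands for the waterfall plot and for profile likelihoods).

```matlab
ar.config.optim.Display = 'iter';   % print the optimizer's progress per iteration

arFitLHS(50);    % multi-start optimization from 50 initial guesses
arPlotChi2s;     % sorted final objective values ("waterfall plot"); plateaus
                 % indicate repeatedly found optima, pure scatter indicates problems

arPLEInit;       % initialize the profile likelihood module
ple;             % calculate profiles; kinks or jumps hint at convergence problems
```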

Causes and cures of optimization problems

  • Michaelis-Menten and Hill equations on the right-hand side of the ODEs are difficult to optimize because the derivative with respect to K_D is close to zero if the dynamic state x is far from K_D. The K_D constants can be put manually in the same range as x.
  • Integration over long time intervals can cause problems. If the data covers a time interval [t1,t2] with t1>>0, consider shifting the times such that t1=0.
  • Think about the integration tolerances ar.config.atol and ar.config.rtol. Both tolerances are usually linked with an OR, i.e. integration is performed such that either the absolute or the relative tolerance is satisfied. arCheckODETolerances indicates how the residuals, the objective function etc. change if the tolerances are altered. Too small tolerances cause integration problems, and too large tolerances decrease the performance of optimization. See the article Integration tolerances and modification possibilities for more details, and the tolerance sketch after this list.
  • If states, inputs or parameters become negative and this is not intended, integration and optimization usually break down. Negative states due to integration errors are prevented by default via ar.model.qPositiveX(:)=1 (see the sketch after this list). Negative values of splines can be prevented by using strictly positive splines (spline_pos3, spline_pos4, spline_pos5, spline_pos10; see arInputFunctionsC.c).
  • Because of the curse of dimensionality, parameter spaces can only be sampled very sparsely. Thus, for models with many parameters, arFitLHS might require large sample sizes, and it can be inefficient to run global multi-start optimization via arFitLHS every time. Try to combine it with arLocalLHS, as sketched after this list.
  • pleIterativeUntilConvergence performs profile likelihood calculation until no better optimum is found during the calculation of the profiles.
  • If error parameters are fitted, try to prevent them from being fitted to values close to zero.
  • Check initial conditions and exploit steady-state equations where possible.
  • Consider using arLoadPars so that your last best fit is not lost (see the sketch after this list).
  • Sometimes the model trajectory exhibits extremely fast dynamics that look like steps or peaks. Such systems are difficult to integrate numerically. Is this behavior intended? If the time points of such immediate changes are known, adding events might be an option (see arAddEvent and the sketch after this list). Restricting the parameters to reasonable ranges or adding weak priors (on parameters or derived variables) might also help. If the time point is only roughly known, one can add an auxiliary state that oscillates or has a sigmoidal shape and thereby indirectly prevents too large steps of the integrator in the problematic time interval.
  • Prevent too long time intervals. In D2D we usually choose the time units such that observation times lie between 1 and 1000. The problem with longer time intervals is related to the initialization of the integrator's step size, so if longer time intervals are required, the issue has to be addressed at that level.
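
A minimal sketch of the tolerance and positivity settings discussed above; the concrete tolerance values are illustrative assumptions, not recommendations, and arCheckODETolerances is called here without arguments, assuming its defaults.

```matlab
% Integration tolerances (illustrative; suitable values depend on the model
% and on the magnitude of the states):
ar.config.atol = 1e-8;    % absolute tolerance
ar.config.rtol = 1e-8;    % relative tolerance
arCheckODETolerances;     % inspect how residuals and objective react to tolerance changes

% Prevent negative states caused by integration errors (the D2D default):
ar.model(1).qPositiveX(:) = 1;
```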
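
Combining global and local multi-start optimization while keeping the best fit could look as follows. arSave is assumed here as the counterpart of arLoadPars, and the argument of arLocalLHS is assumed to be the number of fits, analogous to arFitLHS.

```matlab
arFitLHS(200);                  % global multi-start (expensive for many parameters)
arSave('global_multistart');    % store the workspace so the best fit is not lost

% later, or in a fresh session:
arLoadPars;                     % pick previously saved parameters interactively
arLocalLHS(50);                 % local multi-start around the current best parameters
```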
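
If a step-like stimulus at a known time point causes integration problems, an event can be registered as sketched below. The argument order of arAddEvent (model, condition, event time) is an assumption here; check its help text.

```matlab
% Hypothetical example: a known abrupt change at t = 60 in model 1, condition 1.
% The integrator then stops and restarts at t = 60 instead of stepping over the jump.
arAddEvent(1, 1, 60);
```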

Literature

Raue A, et al. (2013) Lessons Learned from Quantitative Dynamical Modeling in Systems Biology. PLoS ONE 8(9): e74335.
