From cd30f1d77019ddadc73842a1349d371f674ca22f Mon Sep 17 00:00:00 2001
From: Agnes Korcsak-Gorzo <40828647+akorgor@users.noreply.github.com>
Date: Sat, 29 Jun 2024 08:49:51 +0200
Subject: [PATCH] Remove whitespace

---
 models/weight_optimizer.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/models/weight_optimizer.h b/models/weight_optimizer.h
index bebcd3a38b..6ed5c47cd3 100644
--- a/models/weight_optimizer.h
+++ b/models/weight_optimizer.h
@@ -71,7 +71,7 @@ assumes :math:`\epsilon = \hat{\epsilon} \sqrt{ 1 - \beta_2^t }` to be constant.
 When `optimize_each_step` is set to `True`, the weights are optimized at every
 time step. If set to `False`, optimization occurs once per spike, resulting in
 a significant speed-up. For gradient descent, both settings yield the same
-results under exact arithmetic; however, small numerical differences may be 
+results under exact arithmetic; however, small numerical differences may be
 observed due to floating point precision. For the Adam optimizer, only setting
 `optimize_each_step` to `True` precisely implements the algorithm as described
 in [2]_. The impact of this setting on learning performance may vary depending
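
The docstring touched by this patch documents the `optimize_each_step` flag of
NEST's weight optimizers. As a minimal PyNEST sketch of how such a flag might
be set, assuming the optimizer is configured through a synapse model's
`optimizer` dictionary; the model name `eprop_synapse_bsshslm_2020` and the
exact dictionary keys are assumptions for illustration, not part of this
patch::

    import nest

    # Assumed model and parameter names for illustration; consult the NEST
    # documentation for the synapse models that use weight_optimizer.
    nest.SetDefaults(
        "eprop_synapse_bsshslm_2020",
        {
            "optimizer": {
                "type": "adam",              # Adam optimizer as in [2]_
                "optimize_each_step": True,  # exact Adam: update every time step
            }
        },
    )

    # Setting `optimize_each_step` to False instead updates weights only once
    # per spike, trading exactness of Adam for a significant speed-up.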