We had an issue where a component called debounce multiple times with different wait times. Albeit a bit weird and (maybe) bad practice, it led to some strange behaviour.
This test sums up the problem that occurred:
QUnit.test('debounce with later', function(assert) {
  let done = assert.async();
  let bb = new Backburner(['batman']);

  function debounceFunc(obj) {
    assert.notOk(obj.isFoo, 'debounce called with foo');
    assert.ok(obj.isBar, 'debounce called with bar');
  }

  function laterFunc() {
    assert.ok(true, 'later called');
    done();
  }

  const foo = { isFoo: true };
  const bar = { isBar: true };

  // Set a debounce with a 500ms wait, then a later, and then a new debounce
  // with a smaller wait (10ms). The second debounce removes the later
  // instead of the first debounce, so the later never runs.
  bb.debounce(debounceFunc, foo, 500);
  bb.later(laterFunc, 100);
  bb.debounce(debounceFunc, bar, 10);
});
The first debounce had a bigger wait than the second, but between the two calls a later timer was added with a smaller timeout than the first debounce. As a result, the second debounce is inserted into the timers queue before the first debounce rather than after it. The following code, which removes the first debounce from the timers queue, then causes the problem:
let index = findTimerItem(target, method, _timers); // position of the existing debounce
...
let i = searchTimer(executeAt, _timers); // insertion point for the new debounce
...
this._timers.splice(i, 0, executeAt, timerId, target, method, args, stack);
this._timers.splice(index, TIMERS_OFFSET); // assumes i > index
The index is the position of the first debounce, and i is the index where the new debounce will be inserted. Removing the first debounce from the timers queue assumes that i > index, but in our test case i < index because the new executeAt falls before the first debounce. The insertion therefore shifts the first debounce further down the array, and this._timers.splice(index, TIMERS_OFFSET); removes the wrong scheduled timer: in our test case, the later.
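To make the off-by-one concrete, here is a standalone sketch of the removal bug. This is not Backburner's actual code: the flat timer layout is simplified to two slots per entry (executeAt, id), and the suggested adjustment at the end is an assumption, not the upstream patch.

```javascript
const OFFSET = 2; // simplified flat layout: [executeAt, id] per timer

// Queue after the first debounce (executeAt 500) and the later (executeAt 100):
let timers = [100, 'later', 500, 'debounce-foo'];

// Re-debouncing with a 10ms wait: find the old entry, then the insertion point.
let index = timers.indexOf('debounce-foo') - 1; // 2 -- start of the old debounce entry
let i = 0;                                      // new executeAt (10) sorts before everything

timers.splice(i, 0, 10, 'debounce-bar'); // insert the new debounce entry

// The old debounce entry has shifted from index 2 to index 4, but the stale
// `index` is still 2 -- splicing there removes the `later` entry instead:
timers.splice(index, OFFSET);

console.log(timers); // [10, 'debounce-bar', 500, 'debounce-foo']; 'later' is gone

// A possible fix (an assumption on my part): bump the stale index when the
// insertion point precedes it, e.g. if (i <= index) { index += OFFSET; }
```

Running this shows the later entry silently dropping out of the queue, which matches the failing test above.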
I'm curious to see what you guys think about this. The multiple debounce calls are weird, but the behaviour that results is, in my opinion, extremely unwanted. In our case some pretty important function calls were disappearing, and it took a while for us to discover the cause.
I've hit this particular issue too. You've expressed clearly what I was seeing. In my case, I was working with quite a few independent addons all running debounce, throttle and later at their own discretion. I discovered the mutation in this._timers and as a result my task wasn't scheduled when it expired. I really hope someone sees this as it's an annoying issue.