
Debounced results don't load for queries after focus is lost and regained #844

Closed
isaacrlee opened this issue Dec 15, 2021 · 8 comments · Fixed by #864

Comments

@isaacrlee

Description

When using the autocomplete-with-debouncing example code from https://www.algolia.com/doc/ui-libraries/autocomplete/guides/debouncing-sources/, with debug set to false, results do not load for new queries entered after focus is lost and regained.

This issue does not appear to occur in 1.5.0, so I assume it's either related to the concurrency changes in 1.5.1 or the debouncing example needs updates.

This does not appear to occur when debug is set to true.

Reproduction

Preview →

Steps

  1. Open the sandbox above
  2. Click on the search box
  3. Enter a query (e.g. "Cooktop"); results will load. Do not select any of the results
  4. Click elsewhere on the page so that the search box loses focus and the panel closes
  5. Click on the search box to refocus
  6. Enter a new query (e.g. "Macbook")
  7. You will see that results do not load for the new query

Expected behavior

Search results load for new queries entered after focus is lost and regained.

Environment

  • OS: Windows
  • Browser: Edge
  • Autocomplete version: 1.5.1
@sarahdayan
Member

Hello @isaacrlee! This indeed looks like a regression from our latest patch; I'm on it.

@sarahdayan
Member

sarahdayan commented Dec 16, 2021

@isaacrlee The problem occurs because our internal logic monitors whether there are pending promises, which doesn't play well with how debouncePromise works.

In debouncePromise, we always return a promise but we debounce its resolution (the call to resolve). When you type "mac" quickly, three promises are created, but only the last one resolves; the other two remain pending forever.
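For reference, the helper from the debouncing guide looks roughly like the sketch below (paraphrased, not verbatim; see the linked guide for the exact code). It shows why the earlier promises never settle: each call returns a fresh promise, but clearTimeout cancels the earlier timers, so only the last promise's resolve ever runs.

function debouncePromise(fn, time) {
  let timerId = undefined;

  return function debounced(...args) {
    // Cancel the previous call's timer; that call's promise is now stuck
    // in the pending state because its resolve will never be invoked.
    if (timerId) {
      clearTimeout(timerId);
    }

    return new Promise((resolve) => {
      timerId = setTimeout(() => resolve(fn(...args)), time);
    });
  };
}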

This highlights a structural issue in our request management logic. We already have ideas on how to fix it more globally and will schedule the work as soon as possible. In the meantime, you can work around the problem by making sure all created promises resolve. Here's a possible implementation:

function debouncePromise(fn, time) {
  let timerId = null;
  let resolveFns = [];

  return function debounced(...args) {
    // Cancel the previously scheduled call.
    clearTimeout(timerId);

    timerId = setTimeout(() => {
      // Resolve every pending promise with the result of the latest call,
      // so no caller is left hanging.
      resolveFns.forEach((resolve) => resolve(fn(...args)));
      resolveFns = [];
    }, time);

    // Each call returns a promise; its resolver is stored until the timer fires.
    return new Promise((resolve) => resolveFns.push(resolve));
  };
}

Note that it will only work with Algolia sources for now, as it relies on the search client's cache mechanism. If you need to debounce sources that use other APIs, I recommend adding a cache.

If this doesn't work for you, I recommend downgrading to 1.5.0 until we take care of this.

@isaacrlee
Author

@sarahdayan Thanks for the explanation!

I see that the unresolved promises are resolved with the arguments of the "last" promise, which I assume the Algolia client handles so that additional requests aren't made. In my actual usage, I'm fetching from a different backend API, though it seems the browser handles the duplicate requests well by serving them from the disk cache. I would rather live with the slow-promise edge case of v1.5.0 than with the duplicate requests, but the above is a workable mitigation for Algolia sources.

Looking forward to the improvements coming, thanks for the quick response and support.

@sarahdayan
Member

sarahdayan commented Dec 16, 2021

@isaacrlee Yes, the Algolia client has an internal cache, so you're covered when using Algolia sources (see example; check network requests).

If you need to call other APIs, instead of calling fetch directly, you could write a thin wrapper that stores API results in an in-memory, key-based cache. You could eagerly add keys to the cache as soon as a request is made, so you don't fire duplicates while the initial request is still in flight, then update the entry with data once it resolves. Subsequent requests would read from the cache directly instead of triggering a new network call.
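Here's a minimal sketch of that idea; the helper name and endpoint are hypothetical, not part of Autocomplete's API. The cache stores the in-flight promise itself, so duplicate calls made before the first response arrives share the same request, and later calls resolve from memory.

const cache = new Map();

function cachedFetchJson(url) {
  // Eagerly register the in-flight promise so duplicate calls for the same
  // key reuse the pending request instead of firing a new one.
  if (!cache.has(url)) {
    const pending = fetch(url).then((response) => {
      if (!response.ok) {
        // Evict failed requests so they can be retried later.
        cache.delete(url);
        throw new Error(`Request failed with status ${response.status}`);
      }
      return response.json();
    });
    cache.set(url, pending);
  }

  return cache.get(url);
}

// Example usage from a debounced source's getItems:
// getItems({ query }) {
//   return cachedFetchJson(`https://example.com/search?q=${encodeURIComponent(query)}`);
// }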

If this isn't possible, then going back to 1.5.0 should be safe—minus the UI edge case that 1.5.1 fixed.

We'll schedule this as soon as we can, but likely not before early January. Thanks for bearing with us, we'll keep you updated here.

@sarahdayan
Member

Hey @isaacrlee, thanks for your patience. We have an open PR (#864) that should fix the issue and allow you to go back to the original debounce solution.

Feel free to try it out using CodeSandbox's build. From your reproduction sandbox with the new build, it seems to work fine.

@isaacrlee
Author

@sarahdayan Thanks for the update! Agree that the issue is fixed in the repro sandbox, looking forward to the PR getting merged in.

@sarahdayan
Member

Don't hesitate to subscribe to the PR to stay in the loop. I'll comment back here once the fix is released.

@sarahdayan
Member

Hey @isaacrlee, the fix was released in 1.5.2 this morning ✨

Thanks for bearing with us!
