
pollInterval doesn't wait for a response before dispatching next call #531

Closed
mjadczak opened this issue Feb 10, 2020 · 6 comments · Fixed by #1374
Labels
future 🔮 An enhancement or feature proposal that will be addressed after the next release

Comments

@mjadczak

I am using pollInterval with a short interval (~1s) to get a reasonably current "live view" of some data on the server (pending a better implementation with subscriptions). The other day the server experienced a slowdown in its DB access, causing requests usually served in <50ms to take multiple seconds. I then discovered that the pollInterval option dispatches a new request every interval, instead of waiting for the previous response before starting the next one, which ended up sending lots of requests to the already-under-load server. It may be that the previous request is "cancelled" when the new one is dispatched, but the server is already doing the work for it.

Do you think that this is the correct behaviour, or should the interval instead run from the completion of one request to the start of the next? I didn't look into it in depth, but I think this behaviour could be achieved by changing switchMap into concatMap here. If this shouldn't be the default behaviour, would it make sense to at least make it an option?
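Independent of urql's internals, the two semantics at issue can be contrasted in plain JavaScript (fetchData, onResult, and both function names below are illustrative stand-ins, not urql API): a fixed setInterval dispatches on schedule regardless of whether the previous request has finished, while a loop that reschedules only after each response arrives keeps the full interval between one completion and the next start.

```javascript
// Fixed-interval polling: a new request fires every `interval` ms,
// even while the previous one is still in flight (the behaviour
// described in this report). Returns the timer id for clearInterval.
function pollFixed(fetchData, interval, onResult) {
  return setInterval(() => fetchData().then(onResult), interval);
}

// Completion-to-start polling: the next request is scheduled only
// after the previous response arrives, so a slow server never sees
// more than one outstanding request from this client.
function pollAfterCompletion(fetchData, interval, onResult) {
  let stopped = false;
  const loop = () =>
    fetchData().then(result => {
      onResult(result);
      if (!stopped) setTimeout(loop, interval);
    });
  loop();
  return () => { stopped = true; }; // call to stop polling
}
```

With a response time longer than the interval, the first variant stacks up overlapping requests while the second keeps at most one in flight.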

@kitten
Member

kitten commented Feb 11, 2020

Well, we would have to change the semantics of pollInterval to also limit every query to a single response, which may be a breaking change and may have odd cache interactions.

In the case of cache-and-network, for instance, we can’t predict whether a query may yield one or two results.

@mjadczak
Author

Does the stream (callbag) of responses not get closed when all possible responses have been returned? If not, then concatMap as suggested would not work anyway.

@kitten
Member

kitten commented Feb 12, 2020

@mjadczak There is no “list of expected responses”; we could get anywhere from zero to unlimited results for a query. That’s because the stream also returns updates reacting to changes in the cache.

I’m thinking however whether we can make the polling a background side-effect 🤔

@mjadczak
Author

Ah, I get you now. By "background side effect", do you mean that it would be as if something is calling the executeQuery function in the background on a timer? (And if that's the case, the promise returned could be used to throttle the dispatch of the next query?)

@kitten
Member

kitten commented Feb 13, 2020

There's no exact timer, no, but it would probably work something like this:

    if (pollInterval) {
      return pipe(
        response$,
        mergeMap(result => {
          return merge([
            fromValue(result),
            pipe(response$, delay(pollInterval))
          ])
        })
      );
    }

That's only the gist of it though, as this would apply the pollInterval only once. I'd love to solve this more elegantly, so I'll think about the implementation more carefully to avoid making it unnecessarily complicated while still fixing this issue.
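As a library-agnostic sketch of the promise-throttling idea floated earlier in the thread (executeQuery and isActive here are hypothetical stand-ins, not urql's actual client API), the completion-to-start loop could look like:

```javascript
// Hypothetical: executeQuery() returns a promise that resolves when
// the query's response arrives; isActive() says whether to keep polling.
async function pollLoop(executeQuery, pollInterval, isActive) {
  while (isActive()) {
    await executeQuery();                                 // wait for the response
    await new Promise(r => setTimeout(r, pollInterval));  // then wait the interval
  }
}
```

Because the loop awaits the response before sleeping, the interval is measured from completion to the next dispatch, so a slow server never accumulates overlapping polls from this loop.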

@kitten kitten added feature 🚀 future 🔮 An enhancement or feature proposal that will be addressed after the next release and removed discussion 👥 labels Mar 6, 2020
@peterdemartini

Are there any updates on the progress of this?


3 participants