[Application Usage] Use Promise.allSettled during rollups #87675
Conversation
```diff
- await Promise.all(
+ await Promise.allSettled(
```
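For context, the behavioral difference this one-line change relies on can be illustrated standalone (this is not the Kibana code, just a minimal contrast):

```typescript
// Promise.all rejects as soon as any input rejects; Promise.allSettled
// always fulfills and reports a per-promise status.
const makeInputs = () => [Promise.resolve('ok'), Promise.reject(new Error('boom'))];

Promise.all(makeInputs()).catch((e: Error) => {
  // Promise.all short-circuits: we only learn about the first rejection.
  console.log(`all rejected with: ${e.message}`);
});

Promise.allSettled(makeInputs()).then((results) => {
  // Every outcome is reported, fulfilled or rejected.
  console.log(results.map((r) => r.status).join(',')); // "fulfilled,rejected"
});
```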
As this will no longer throw, should we retrieve the result of allSettled to log the potential failures?
Good point! I've pushed an update to throw if we find that any promise was rejected
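That update could look roughly like the sketch below (a hypothetical helper; `settleOrThrow` and its error message are illustrative, not the actual patch):

```typescript
// Run all rollup operations, then surface any rejections as a single error
// instead of silently swallowing them.
async function settleOrThrow<T>(promises: Array<Promise<T>>): Promise<T[]> {
  const results = await Promise.allSettled(promises);
  const failures = results.filter(
    (r): r is PromiseRejectedResult => r.status === 'rejected'
  );
  if (failures.length > 0) {
    throw new Error(
      `${failures.length} rollup operation(s) failed: ` +
        failures.map((f) => String(f.reason)).join('; ')
    );
  }
  // All promises fulfilled at this point.
  return (results as Array<PromiseFulfilledResult<T>>).map((r) => r.value);
}
```

Unlike a plain `Promise.all`, this waits for every operation to finish before reporting, so one early failure cannot hide the outcome of the others.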
Since this happens very frequently when there are multiple nodes, and users should always ignore this message when it happens, we decided to lower the log level to debug.
@elasticmachine merge upstream
I think there's a problem with this algorithm: because we load the transactions before the dailyDoc, there's a race condition where some transactions will be counted twice. Any time we are unable to delete a document, it means that document was counted twice.
The simplest way around this would be to first delete the transactions and then only update the dailyDoc with the values of the transactions that were successfully deleted. We would then need to use incrementCounter to prevent version conflicts (or retry when a version conflict occurs until it succeeds). The downside of this algorithm is that we could lose data if a node is restarted after deleting transactions but before updating the daily doc. This is less likely than counting documents twice, so maybe it's a better tradeoff for application usage?
The other option is to do something like:
- Load the dailyDoc
- Load all transactions for that day
- Update the dailyDoc
- Delete all transactions for that day
- Process the next day
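The ordering above could be sketched roughly like this, using in-memory stand-ins for the saved objects (all types and names here are illustrative, not the real Kibana implementation):

```typescript
// In-memory sketch of the proposed per-day rollup ordering:
// load dailyDoc -> load transactions -> update dailyDoc -> delete transactions.
interface DailyDoc {
  day: string;
  count: number;
}

interface Transaction {
  day: string;
  count: number;
}

function rollupDay(
  day: string,
  dailyDocs: Map<string, DailyDoc>,
  transactions: Transaction[]
): Transaction[] {
  // 1. Load the dailyDoc (create it if missing).
  const daily = dailyDocs.get(day) ?? { day, count: 0 };
  // 2. Load all transactions for that day.
  const todays = transactions.filter((t) => t.day === day);
  // 3. Update the dailyDoc with the transaction totals.
  daily.count += todays.reduce((sum, t) => sum + t.count, 0);
  dailyDocs.set(day, daily);
  // 4. Delete all transactions for that day; return what remains.
  return transactions.filter((t) => t.day !== day);
}
```

In the real system each step is a saved-objects call that can fail independently, which is where the double-counting vs. data-loss tradeoff discussed above comes in.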
If we only roll up dailies for completed days (i.e. for days < today), then we can efficiently see which days have already been rolled up.
If we had saved object aggregations we could do it even faster, but I don't think we have a timeline for that (#64002).
@rudolf I agree it's a very valid concern, and it'd be ideal if we could run aggregations. However, I think we can patiently wait for those features because I believe the potential harm is very minor: the transactional documents are sent and stored from the browser every 3 minutes while the tab is on-screen (otherwise the browser itself sleeps the intervals). Failing to delete a document only affects the actions that could have happened during those 3 minutes. Not ideal, but I think the effect on the overall stats is minor. Also, I would expect the issue to occur mainly during the upgrade process from pre-7.9.0 (that's when we have plenty of documents to roll up and delete). After that, we attempt to roll up every 30 minutes, and the number of documents accumulated in that period should be manageable. What do you think?
💚 Build Succeeded
To update your PR or re-run it, just comment with:
…87675) Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Summary
Now that we are running Node v14, we can use Promise.allSettled instead of Promise.all during rollups.
Checklist
Delete any items that are not applicable to this PR.
For maintainers