[$500] [HOLD for payment 2023-12-20] The whole queue of offline requests are processed when you come back online, delaying online requests #28172
Comments
If we wanted to take it a step further, I have another idea building off one I've proposed before in a different context...

Solution: Replace the sequential queue with a directed acyclic graph (DAG) of requests, which we can call the `GraphQueue`.

This means that instead of processing write requests one-by-one in the sequential queue, we can process non-conflicting requests in parallel, and only requests that depend on other parent requests need to run in sequence. This could massively improve the speed at which we process requests made offline, especially on a slower network.
It's worth noting that this would be a pretty big change, so if we were going to roll this out, I think the prudent thing to do would be to make the default behavior of the `GraphQueue` fully sequential, matching the existing queue:

```mermaid
flowchart LR
    subgraph first requests
        A(Request A)
    end
    subgraph second requests
        B(Request B)
    end
    subgraph third requests
        C(Request C)
    end
    subgraph fourth requests
        D(Request D)
    end
    subgraph fifth requests
        E(Request E)
    end
    subgraph sixth requests
        F(Request F)
    end
    A --> B --> C --> D --> E --> F
```
Only when we're certain which requests can safely run in parallel would we introduce branching in the graph, like so:

```mermaid
flowchart LR
    subgraph first requests
        A(Request A)
    end
    subgraph second requests
        B(Request B)
        D(Request D)
    end
    subgraph third requests
        C(Request C)
    end
    subgraph fourth requests
        E(Request E)
    end
    subgraph fifth requests
        F(Request F)
    end
    A --> B --> C
    A --> D
    C --> E --> F
```
If we know that a request is independent of any other requests in the graph, we can provide an empty array for its dependencies, and it can start immediately:

```mermaid
flowchart LR
    subgraph first requests
        A(Request A)
        F(Request F)
    end
    subgraph second requests
        B(Request B)
        D(Request D)
    end
    subgraph third requests
        C(Request C)
    end
    subgraph fourth requests
        E(Request E)
    end
    A --> B --> C
    A --> D
    C --> E
```
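To make the idea concrete, here is a minimal sketch of how such a `GraphQueue` could process a DAG of requests. All of the types and names below are hypothetical (not existing App APIs): each request lists its parent request IDs, and a request starts as soon as all of its parents have settled, so independent requests run in parallel.

```typescript
type RequestID = string;

type GraphRequest = {
    id: RequestID;
    // IDs of requests that must finish before this one starts;
    // an empty array means the request is fully independent.
    dependsOn: RequestID[];
    run: () => Promise<void>;
};

// Process a DAG of requests, running non-conflicting requests in parallel.
// Requests whose parents have all settled are started immediately.
async function processGraphQueue(requests: GraphRequest[]): Promise<void> {
    const done = new Map<RequestID, Promise<void>>();
    for (const request of requests) {
        // Wait for all parents, then run. A request with no pending
        // parents starts right away, so siblings run concurrently.
        const parents = request.dependsOn.map((id) => done.get(id) ?? Promise.resolve());
        done.set(request.id, Promise.all(parents).then(() => request.run()));
    }
    await Promise.all(done.values());
}
```

Note that this sketch assumes parents are enqueued before their children, which holds for a queue built incrementally as requests are made.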
@muttmuure Can you assign me and @kosmydel to this issue?

📣 @WojtekBoman! 📣 Comment to enable the assignment :)
@roryabraham I have one question about the first proposed approach to solve this problem. Is the reportId parameter sufficient to make requests idempotent? I checked the params of this request, and besides the reportId it also accepts params such as emailList, accountIDList, etc. In the first approach, if we send the OpenReport request twice with the same reportID but with different additional parameters, we will get a response only from the last sent request. Is this okay? I'd like to be sure that this is how it should work.
It depends on the request. Each request may have its own

I think that is indeed ok for
@mountiny made a great point here that if we make a report optimistically. So maybe what we want to do is, instead of just throwing away the earlier request and only keeping the later one, we should:
@roryabraham Okay, so I would like to define a strategy for merging data from old and new OpenReport requests. For
Hey @roryabraham. Together with Wojciech Boman, we have another set of questions about the GraphQueue implementation.

General questions:

Idempotency questions:
Yes, I think the merged request should have the value for that param defined. Basically just use
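As a rough sketch of such a merge (the helper below is hypothetical, not an existing App function; `emailList` and `accountIDList` are the params mentioned earlier in this thread, and the union strategy is an assumption), the newer request's scalar params could win while list-valued params are unioned so neither request's data is lost:

```typescript
type OpenReportParams = {
    reportID: string;
    emailList?: string[];
    accountIDList?: number[];
};

// Merge two queued OpenReport requests for the same report: newer scalar
// params override older ones, list params are deduplicated unions.
function mergeOpenReportParams(older: OpenReportParams, newer: OpenReportParams): OpenReportParams {
    return {
        ...older,
        ...newer,
        emailList: [...new Set([...(older.emailList ?? []), ...(newer.emailList ?? [])])],
        accountIDList: [...new Set([...(older.accountIDList ?? []), ...(newer.accountIDList ?? [])])],
    };
}
```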
Possibly – we have to think about it and test it carefully. Of course, we should make this feature opt-in by just providing an

I can't think of any cases when this would happen, but it seems likely that there may be such a case. I don't think there's much complexity that will be added by having

One case when it would be helpful is already solved in another way by the

This could be a case when

I suppose it would be returned from
If you are the assigned CME, please investigate whether the linked PR caused a regression and leave a comment with the results. If a regression has occurred and you are the assigned CM, follow the instructions here. If this regression could have been avoided, please consider also proposing a recommendation to the PR checklist so that we can avoid it in the future.
The solution for this issue has been 🚀 deployed to production 🚀 in version 1.4.11-25 and is now subject to a 7-day regression period 📆. Here is the list of pull requests that resolve this issue: If no regressions arise, payment will be issued on 2023-12-20. 🎊 After the hold period is over and BZ checklist items are completed, please complete any of the applicable payments for this issue, and check them off once done.
For reference, here are some details about the assignees on this issue:
New PR is on prod: #32246

So I believe C+ payment is due here to @alitoshmatov

My mistake @alitoshmatov, just realized that we don't have anyone assigned to this issue to help issue payment for the C+ review of #32246. Let's get that sorted...

Triggered auto assignment to @sakluger

@sakluger only action-item for you here is to issue a standard C+ review payment to @alitoshmatov for #32246. Thanks!

Job added to Upwork: https://www.upwork.com/jobs/~01697b67b9aa3280da

Current assignee @alitoshmatov is eligible for the External assigner, not assigning anyone new.

@alitoshmatov I sent you an offer on Upwork, please let me know once you've accepted. Thanks!

@sakluger Accepted the offer

Completed payment, thanks!
https://expensify.slack.com/archives/C05LX9D6E07/p1695105894373289
Problem
When we're offline, we'll build up a ton of requests if we keep navigating; OpenReport is probably the biggest offender. That's bad because we'll process the whole queue once we're back online, so requests that I actually need to execute once I'm online will have to wait until the whole queue is processed.
Let's say you have navigated to 5 reports while offline.
When you go online:
- We'll start processing the queue.
- OpenReport usually takes a couple of seconds per request, so we'll wait roughly 10 seconds for all five requests to finish.
- Because we also queue the onyxUpdates for write requests, the write operations for all of them must finish before we actually see anything in the report we just navigated to.
Solution
- Only call OpenReport exactly once per report.
- Use a report metadata key to store whether we have ever called OpenReport for a specific report.
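The guard described above could be sketched like this. This is a simplified stand-in using a plain `Map` and a hypothetical `hasOnceLoadedReport` flag; the real app would presumably persist the flag under an Onyx report-metadata key instead:

```typescript
// Tracks, per report, whether OpenReport has ever been called.
// Hypothetical stand-in for a persisted report-metadata store.
const reportMetadata = new Map<string, {hasOnceLoadedReport: boolean}>();

function openReportIfNeeded(reportID: string, callOpenReport: (reportID: string) => void): void {
    if (reportMetadata.get(reportID)?.hasOnceLoadedReport) {
        // We've already fetched this report once; skip the duplicate request.
        return;
    }
    reportMetadata.set(reportID, {hasOnceLoadedReport: true});
    callOpenReport(reportID);
}
```

With this in place, navigating to the same report five times while offline enqueues only one OpenReport request instead of five.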
Issue Owner
Current Issue Owner: @sakluger