# [Tracking] Metric - Report screen load time (Performance audit by Callstack) #34494
Tracking issue for the report screen load time metric. Part of #33070
FYI this issue has been edited to unify the 2 metrics which we thought were overlapping too much - load time in ms and FPS while loading are too close to each other, and posting them separately only made things less clear. All of our research and analysis regarding opening up a report will be posted under this one issue :) cc @mountiny
Thank you, makes sense!
A quick update - we're aiming to provide the analysis by the EOW so we can start picking action items for this scope. |
Update! The analysis will most probably drop tomorrow, it's now going through our internal review process :) |
## Metric: Report screen load time

This analysis was produced and authored by Jakub Romańczyk (@jbroma) and later coauthored by @adhorodyski.

### Introduction

In our investigation, we utilized data measured during the previous phase, where the majority of the workload was concentrated on the JavaScript (JS) thread, and focused primarily on this aspect. Trace collection was done with the help of a new set of markers placed across the code:
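Markers like these are typically built on the standard User Timing API. The sketch below is a hypothetical illustration (the marker and function names are invented here, not the app's exact implementation):

```typescript
// Hypothetical timing markers using the User Timing API
// (performance.mark / performance.measure). Marker names are
// invented for this sketch.
function markOpenReportStart(): void {
    performance.mark('open_report_start');
}

function markOpenReportEnd(): number {
    performance.mark('open_report_end');
    // Creates a PerformanceMeasure entry spanning the two marks.
    const measure = performance.measure('open_report', 'open_report_start', 'open_report_end');
    return measure.duration; // elapsed time in milliseconds
}
```

The resulting entries show up in collected traces, which is what makes flamegraph analysis against named spans like `open_report` possible.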
Please see this commit for the exact implementation.

### Detailed Analysis

We've found the following hot areas during the direct analysis of the traces:
On top of that, taking a more holistic approach, we have also investigated the possibility of optimizing Onyx connections. A baseline Hermes trace in the JSON format is attached for reference.

### getOrderedReportIDs

#### Overview and the lack of freezing

We've identified that the `getOrderedReportIDs` function is a hot spot during report load, and it is re-run even for parts of the UI that are not on screen. The reason this happens in the first place is the lack of freezing on the hidden part of the React tree. As long as a part of the React tree is not in the view (and here it's the case for mobile devices), calculating things in the background makes no sense for the end user, and can degrade performance while interacting with the visible parts of the UI. It would not be a big deal with a lightweight tree, but in this case the heavy calculations really impact performance, and preventing them from running will vastly improve the user experience.

The rest of the analysis of this function's implementation still holds and is incredibly important, as it directly impacts all 3 Core Metrics, but in our opinion the most important step in addressing the long loading times is the freezing mechanism.

#### Breakdown

With this context, please see the breakdown below. The function can be divided into 4 main parts which show up in the flamegraph:
#### Cache lookup

This finding is the same as the one described by Kacper in his analysis. In essence, the cache lookup itself is costly for large datasets. The usefulness of the cache is almost non-existent (please see the base trace), and its removal should be considered to lower the execution time of `getOrderedReportIDs`.

#### Filtering

In short, this part of the function cannot be further optimised by nearly any margin, as most of it represents the business logic for choosing the reports to display. As long as it all lives on the JS thread and cannot be offloaded to the SQL layer available underneath (more on this later), it can only be micro-optimised, and we decided not to spend more time on this aspect. For now, the bigger the dataset, the longer the filtering takes.
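To make the freezing point above concrete, here is a small conceptual sketch in plain TypeScript (not the react-freeze implementation): while a consumer is marked frozen, it serves its last computed result instead of recomputing, which is what freezing an off-screen part of the tree buys us.

```typescript
// Conceptual sketch of freezing: while frozen, skip the expensive
// recomputation and serve the last known result. This mimics what
// freezing a hidden React subtree achieves at the data level.
function createFreezableSelector<T>(compute: () => T) {
    let frozen = false;
    let lastResult: T | undefined;

    return {
        setFrozen(value: boolean): void {
            frozen = value;
        },
        get(): T {
            if (frozen && lastResult !== undefined) {
                return lastResult; // stale but cheap while off-screen
            }
            lastResult = compute();
            return lastResult;
        },
    };
}
```

In the app itself this would be handled at the React level (e.g. by wrapping the hidden navigation subtree in a freezing component) rather than by a hand-rolled gate like this.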
cc @mountiny, I know it's quite a write-up 😅 Please feel free to bring in anyone to help us with picking the action items! Not all of them were already brought up in the other metrics, so there's still some work to be done, but this marks the final stage of the Analysis phase :) Let me know if I can help you with anything here, happy to answer any questions.
@adhorodyski it's great as always. I think the main points are shared with the other analyses too. The other smaller improvements can then follow in smaller, self-contained PRs/issues. Regarding the SQL, I think it would be great if you could also join the discussions with Margelo and the SQLite team so you can follow what's up more closely. Also feel free to use the SQLite team's help to answer any of your questions.
@mountiny of course, thank you so much! I'd probably just need your help with picking the right option for the slice/static/computed approach around the report name.
@adhorodyski thanks! Looking into that, it feels like an edge case only for Group chats. Those might have custom names too in the future, so I think the best option will be to use the slice method so we ensure the names of the users are in the report name.
Alright, to sum this up:
When it comes to the freezing mechanism, would you like us to look into fixing this afterwards? As mentioned, this can help by a ton and strip down the whole execution time of this function.

Also, I'll make sure to link the follow-up work here.

cc @mountiny
@adhorodyski I think the freezing aspect should be looked at in parallel but with less urgency than the methods improvements |
Ok, in this case I think we'll be able to start this next week as the first PRs are already being set up. Thank you, that's exactly what we needed in this phase for this metric :)
Assuming you're referring to this, I'd be a big fan of removing that, especially if it's not providing us much value. In my experience, ad-hoc caches created for very specific use cases such as this one add a lot of complexity to the code and can lead to bugs (e.g. you add a new dependency but don't know you need to update the cache key).
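The failure mode described here (adding a dependency without updating the cache key) can be shown in a few lines; the function and names below are hypothetical:

```typescript
// Hypothetical bug: the function starts depending on `locale`,
// but the cache key is still built only from `name`, so a locale
// change serves a stale greeting from the cache.
const greetingCache = new Map<string, string>();

function greet(name: string, locale: string): string {
    const key = name; // BUG: `locale` is missing from the key
    const cached = greetingCache.get(key);
    if (cached !== undefined) {
        return cached;
    }
    const greeting = locale === 'pl' ? `Cześć, ${name}!` : `Hello, ${name}!`;
    greetingCache.set(key, greeting);
    return greeting;
}
```

Once `locale` influences the output but not the key, every call after the first serves a potentially stale value, and no type checker will flag it.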
Here's an idea we've discussed a bit before... we have some business logic that's currently duplicated in the front-end and back-end:
Furthermore, problems can occur if the front-end and back-end get out of sync in their implementations of this logic. So what if we extract some of this core business logic into an open-source C++ library that can be used by both the front-end and the back-end? That could help:
Now, I recognize I'm making some assumptions here, but I feel like this would be more worth discussing than using SQLite in the front-end as a normal relational database, mostly because it would be a relatively easy step to take from where we are today, and could provide a lot of the same value, without requiring a massive rearchitecture of our front-end data layer.
Regarding
I think a simple change could improve this significantly:

```diff
diff --git a/src/libs/ReportUtils.ts b/src/libs/ReportUtils.ts
index ebde1b1bf8..01f56849a3 100644
--- a/src/libs/ReportUtils.ts
+++ b/src/libs/ReportUtils.ts
@@ -2545,10 +2545,17 @@ function getReportName(report: OnyxEntry<Report>, policy: OnyxEntry<Policy> = nu
     // Not a room or PolicyExpenseChat, generate title from participants
     const participantAccountIDs = report?.participantAccountIDs ?? [];
-    const participantsWithoutCurrentUser = participantAccountIDs.filter((accountID) => accountID !== currentUserAccountID);
-    const isMultipleParticipantReport = participantsWithoutCurrentUser.length > 1;
-
-    return participantsWithoutCurrentUser.map((accountID) => getDisplayNameForParticipant(accountID, isMultipleParticipantReport)).join(', ');
+    const isMultipleParticipantReport = participantAccountIDs.length > 2;
+    const participantDisplayNames = [];
+    let i = 0;
+    while (i < participantAccountIDs.length && participantDisplayNames.length < 6) {
+        const accountID = participantAccountIDs[i];
+        if (accountID !== currentUserAccountID) {
+            participantDisplayNames.push(getDisplayNameForParticipant(accountID, isMultipleParticipantReport));
+        }
+        i++;
+    }
+    return participantDisplayNames.join(', ');
 }

 /**
```
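As a standalone illustration of the early-exit idea in the patch, here is a simplified sketch (the cap of 6 mirrors the diff, but the helper names are invented and this is not the app's code):

```typescript
// Standalone sketch of the early-exit idea: instead of mapping every
// participant, stop walking the list once a fixed number of display
// names has been collected. `getDisplayName` stands in for a real
// lookup function.
function getLimitedParticipantNames(
    accountIDs: number[],
    currentUserAccountID: number,
    getDisplayName: (id: number) => string,
    limit = 6,
): string {
    const names: string[] = [];
    let i = 0;
    while (i < accountIDs.length && names.length < limit) {
        if (accountIDs[i] !== currentUserAccountID) {
            names.push(getDisplayName(accountIDs[i]));
        }
        i++;
    }
    return names.join(', ');
}
```

The win is that the number of display-name lookups is bounded by the cap rather than by the participant count, which matters for reports with hundreds of participants.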
Happy to remove any deprecated/unused fields 👍🏼 Would love it if we could catch this with static analysis too. |
Love it, easy change for a huge win. Another easy change we could make on top of that, coming from the docs for
@adhorodyski minor tip for presenting small code changes in a before/after format – you can use a diff:
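For example, a small hypothetical change rendered in that format:

```diff
- const names = accountIDs.map((id) => getName(id));
+ const names = accountIDs.slice(0, 6).map((id) => getName(id));
```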
Good catch on the duplicate!
This is actually not completely true – it comes down to the second argument. This is another case where we could stand to benefit from the (coming-soon) improvements.
Sounds good 👍🏼

Also sounds good 👍🏼
Thank you for clarifying here! @jbroma I think we should once more look into this, might be helpful. |
Nice catch! Please see this other analysis by @hurali97 under this post for how much this improves the performance. |
The solution for this issue has been 🚀 deployed to production 🚀 in version 1.4.62-17 and is now subject to a 7-day regression period 📆.

Here is the list of pull requests that resolve this issue:

If no regressions arise, payment will be issued on 2024-04-25. 🎊

For reference, here are some details about the assignees on this issue:
I think we will create new issues for the second audit
## Metric: Report screen load time

- Phase: Measurements
- Commit hash: callstack-internal@f15ed75

### Report screen load time

- Marker: `open_report`
- Description: duration between pressing a report list item on the LHN and the report page being fully interactive

### FPS

Workflow:

- Average FPS: 59.37

Example FPS chart from flashlight measure:

### Additional markers: avg CPU usage, avg RAM usage

- Avg from all rounds -> CPU usage (%): 144.36
- Avg from all rounds -> RAM usage (MB): 827.83

Example charts from flashlight measure:

- Total CPU
- CPU per Thread
- RAM