[$250] Chat - Group is created with duplicate user when chat is started quickly on a slow network #49931
Comments
Triggered auto assignment to @RachCHopkins
Please clarify the "actual result".
Waiting on a response/clarification from the QA team.
The team is still able to reproduce this. (Video: group.chat.mp4)
OK, to sum up: it can be reproduced, and we don't know what/who the third user is.
Job added to Upwork: https://www.upwork.com/jobs/~021843049310841478139
Triggered auto assignment to Contributor-plus team member for initial proposal review - @parasharrajat
Edited by proposal-police: This proposal was edited at 2024-10-07T12:12:00Z.

Proposal

Please re-state the problem that we are trying to solve in this issue.
After sending the message, the chat turns into a group with duplicate users. It only corrects itself back to a single chat after switching to another chat and then returning to it.

What is the root cause of that problem?
When a write request completes, its Onyx updates are queued (App/src/libs/actions/QueuedOnyxUpdates.ts, lines 11 to 15 in 04214cd). We've had the logic to flush the queue after all requests are complete (App/src/libs/Network/SequentialQueue.ts, lines 172 to 174 in 04214cd). But in this case, after the check in App/src/libs/actions/OnyxUpdates.ts (lines 147 to 153 in 04214cd), the queue is paused, so the queued updates are not flushed.

What changes do you think we should make in order to solve the problem?
Flush the queued updates when the queue resumes (App/src/libs/Network/SequentialQueue.ts, line 192 in 04214cd).

What alternative solutions did you explore? (Optional)
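For context, here is a minimal sketch of the queue-then-flush pattern this proposal describes. The names and shapes (queueOnyxUpdates, flushQueue, persistedRequestCount, isQueuePaused) are simplified stand-ins for illustration, not the App's actual QueuedOnyxUpdates/SequentialQueue code.

```ts
// Simplified illustration (hypothetical names, not the App's real modules):
// Onyx data produced by write requests is parked in a queue and only applied
// once the sequential queue has drained all persisted requests.
type OnyxUpdate = {key: string; value: unknown};

const queuedOnyxUpdates: OnyxUpdate[] = [];
let persistedRequestCount = 0;
let isQueuePaused = false;

function push(request: string): void {
    persistedRequestCount += 1;
    console.log(`Persisted request: ${request}`);
}

function queueOnyxUpdates(updates: OnyxUpdate[]): void {
    queuedOnyxUpdates.push(...updates);
}

function flushQueue(): void {
    // Apply everything that accumulated while requests were in flight, then clear the queue.
    console.log(`Applying ${queuedOnyxUpdates.length} queued updates`);
    queuedOnyxUpdates.length = 0;
}

function onRequestProcessed(): void {
    persistedRequestCount -= 1;
    // The bug described above: if the queue gets paused (e.g. while the client
    // fetches missing updates), this flush is skipped, so OpenReport's successData
    // stays queued and the optimistic duplicate participant is never cleaned up.
    if (persistedRequestCount === 0 && !isQueuePaused) {
        flushQueue();
    }
}

// Example flow mirroring the report: OpenReport finishes while AddComment is still pending,
// so its updates wait in the queue until the last request is processed.
push('OpenReport');
push('AddComment');
queueOnyxUpdates([{key: 'report_123', value: {participants: ['only-the-real-user']}}]);
onRequestProcessed(); // OpenReport done: AddComment still pending, nothing flushed yet
onRequestProcessed(); // AddComment done: flush runs, unless the queue was paused meanwhile
```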
Proposal

Please re-state the problem that we are trying to solve in this issue.
On slow networks, if we create a report and comment on it, a group chat with a duplicate user is created.

What is the root cause of that problem?
For write requests, we do not write the response onyxData and the corresponding successData/failureData/finallyData to Onyx directly. Instead, we write them to queuedOnyxUpdates to prevent the replay effect (App/src/libs/actions/OnyxUpdates.ts, lines 30 to 32 in 78e484f).
These updates are flushed only after all the PersistedRequests are cleared (App/src/libs/Network/SequentialQueue.ts, lines 173 to 175 in 78e484f).
We remove a request from the persisted requests only after its response is received and processed by all middlewares (App/src/libs/Network/SequentialQueue.ts, lines 90 to 99 in 78e484f).
So, on slow networks, when we call OpenReport, its response and the corresponding request's successData, etc., are added to queuedOnyxUpdates. These cannot be flushed immediately because an AddComment request was already added to the PersistedRequests. For some reason that needs to be fixed, the backend sends a previousUpdateID for AddComment that is larger than the previous lastUpdateID/lastClientUpdateID, which makes doesClientNeedToBeUpdated true.
As a result, the AddComment response is not processed directly with OnyxUpdates.apply; it is saved to saveUpdateInformation, and the queue is paused. While the queue is paused, the updates waiting in queuedUpdates are not flushed, so OpenReport's successData sits in queuedUpdates, the optimistic user detail is not removed, and the report appears to be a group report.

What changes do you think we should make in order to solve the problem?
Flush the queued updates once the missing-updates query settles and the queue resumes:
DeferredOnyxUpdates.getMissingOnyxUpdatesQueryPromise()?.finally(finalizeUpdatesAndResumeQueue).then(() => QueuedOnyxUpdates.flushQueue());

What alternative solutions did you explore? (Optional)
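To make the proposed one-liner easier to picture, here is a sketch of where it would hook in. The module shapes below are assumptions based only on the names used in the proposal (DeferredOnyxUpdates, QueuedOnyxUpdates, finalizeUpdatesAndResumeQueue); the real SequentialQueue internals may differ.

```ts
// Sketch only: the module shapes below are assumed from the names in the proposal,
// not copied from the repository.
const QueuedOnyxUpdates = {
    flushQueue(): Promise<void> {
        // In the App this applies every queued Onyx update and empties the queue.
        console.log('Flushing queued Onyx updates');
        return Promise.resolve();
    },
};

const DeferredOnyxUpdates = {
    // Resolves once the client has fetched the Onyx updates it was missing;
    // undefined when no such query is in flight.
    missingUpdatesQuery: undefined as Promise<void> | undefined,
    getMissingOnyxUpdatesQueryPromise(): Promise<void> | undefined {
        return this.missingUpdatesQuery;
    },
};

function finalizeUpdatesAndResumeQueue(): void {
    // Clears the saved "missing updates" state and unpauses the sequential queue.
    console.log('Queue resumed');
}

// The proposed change: after the missing-updates query settles and the queue resumes,
// also flush the updates that were queued while the queue was paused, so OpenReport's
// successData is applied and the optimistic duplicate participant disappears.
DeferredOnyxUpdates.getMissingOnyxUpdatesQueryPromise()
    ?.finally(finalizeUpdatesAndResumeQueue)
    .then(() => QueuedOnyxUpdates.flushQueue());
```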
@parasharrajat, @RachCHopkins Whoops! This issue is 2 days overdue. Let's get this updated quick!
@parasharrajat do you like any of the proposals here?
📣 It's been a week! Do we have any satisfactory proposals yet? Do we need to adjust the bounty for this issue? 💸
@RachCHopkins I won't be available from 16 Oct for a few days, please reassign.
I can take over this issue as C+.
@parasharrajat @RachCHopkins this issue was created 2 weeks ago. Are we close to approving a proposal? If not, what's blocking us from getting this issue assigned? Don't hesitate to create a thread in #expensify-open-source to align faster in real time. Thanks!
The latest conversation is here: https://github.com/Expensify/App/pull/51712/files#r1842905572 The backend work is still not done; it ended up being more work to fix than expected. I expect that the PR should be ready for review this week. Considering the number of cases we have to update, I'm planning to fix the cases that are known to be causing problems first (i.e.
I'm making the backend PR ready for review: https://github.com/Expensify/Auth/pull/13168
@aldo-expensify @RachCHopkins This issue has already been fixed. (Video: Screen.Recording.2024-12-09.at.14.42.48.mov) cc @nkdengineer
Other issues that have the same RCA have also been fixed.
@nkdengineer This PR seems unnecessary. I think we can close it.
I closed the PR this morning. I think this issue can be resolved, but @nkdengineer did a good amount of work on this, so I think the bounty should be reduced by 50% to $125 and still paid out for this. @RachCHopkins
Sounds good to me, @tgolen!
Payment Summary:
Upwork job here
Contributor has been paid, the contract has been completed, and the Upwork post has been closed.
@RachCHopkins As per our guidelines, the contributor should get full compensation in this case. Could you please evaluate it again?
Reopening so this gets an answer: #49931 (comment)
Sorry @aldo-expensify @DylanDylann @tgolen, I'm not understanding who should be paid what here. @tgolen notes above that it should be a 50% payment because some work was done but never implemented. @DylanDylann has linked to a post that says that when someone has a solution which is implemented but later reverted, and then not used moving forward due to internal Expensify BE work not being complete, they should get a full payment. I see that in this case @nkdengineer's proposal was accepted but not implemented, which is different from the case discussed above (but please correct me if I'm wrong), and this whole issue was deemed unnecessary because everything was fixed elsewhere.
@RachCHopkins From https://expensify.slack.com/archives/C02NK2DQWUX/p1723055151068559?thread_ts=1723015261.405329&cid=C02NK2DQWUX, @mallenexpensify said that:
In this issue, the contributors were already hired and the PR was also ready (almost all of the work had already been done), but the deliverable changed because we decided to fix it on the BE. So I think the contributors should be paid in full. Anyway, I only note this in case we overlooked the BZ SO; if the reduced price is intentional, let's close this one.
I'm just the girl with the checkbook here, so I will wait on @tgolen and @aldo-expensify.
OK, I didn't know about that SO. The policy is clear in the SO, so let's just follow it and pay out 100%.
@DylanDylann thanks for the link, tag, and quoted text. That might be a little outdated, or the below, from the main payment SO, might be more up to date.
Reasoning: I don't like the idea that someone could be hired, do no work on the PR, then get full payment because something happened between the time they were hired and when they submitted their PR; we don't want to encourage people to move slowly when there's a possible payout. (That definitely doesn't seem to be what happened here, just providing an example for the process.) Based on the above process, it appears 100% compensation is due based on @nkdengineer's PR.
This was a struggle with many challenges along the way. In this issue, I also worked on reviewing the PR when it was created and discovered another problem, which started a long discussion and ended up with the fix on the BE side. After the fix was deployed, I also helped verify that all related issues were fixed. I raised this concern because I didn't know the exact process in this case. @mallenexpensify Thanks for the official information.
@DylanDylann please always reference past posts and details. We appreciate the help and all the invested time. My goal is to keep processes up to date, to make them fair, and to discuss with contributors if they don't agree.
OK, so I paid the Contributor and C+ $125 each here, so it looks like I need to go back and do some remedial work as far as payments go. I shall work out the best way to do so!
Sorry for all the confusion @DylanDylann and @nkdengineer - I have made a bonus payment for each of you on the original contracts to shore up the balance!
If you haven’t already, check out our contributing guidelines for onboarding and email contributors@expensify.com to request to join our Slack channel!
Version Number: 9.0.41-2
Reproducible in staging?: Y
Reproducible in production?: Y
If this was caught during regression testing, add the test name, ID and link from TestRail: N/A
Email or phone of affected tester (no customers): biruknew45+1195@gmail.com
Issue reported by: Applause - Internal Team
Action Performed:
Expected Result:
The chat with the selected user should remain a 1-on-1 chat. The chat should not change into a group with duplicate users.
Actual Result:
After sending the message, the chat turns into a group with duplicate users. It only corrects itself back to a single chat after switching to another chat and then returning to it.
Workaround:
Unknown
Platforms:
Which of our officially supported platforms is this issue occurring on?
Screenshots/Videos
Add any screenshot/video evidence
Bug6619285_1727614560806.1.mp4
View all open jobs on GitHub
Upwork Automation - Do Not Edit
Issue Owner
Current Issue Owner: @RachCHopkins