integration: Fix testAddingUserNonFullyConnectedFederation and testNotificationsForOfflineBackends #3529
Changes from all commits: 863b383, 25437f4, 09558b7, 9685456
@@ -22,9 +22,10 @@ testNotificationsForOfflineBackends :: HasCallStack => App ()
  testNotificationsForOfflineBackends = do
    resourcePool <- asks (.resourcePool)
    -- `delUser` will eventually get deleted.
-   [delUser, otherUser] <- createAndConnectUsers [OwnDomain, OtherDomain]
+   [delUser, otherUser, otherUser2] <- createAndConnectUsers [OwnDomain, OtherDomain, OtherDomain]
    delClient <- objId $ bindResponse (API.addClient delUser def) $ getJSON 201
    otherClient <- objId $ bindResponse (API.addClient otherUser def) $ getJSON 201
+   otherClient2 <- objId $ bindResponse (API.addClient otherUser2 def) $ getJSON 201

    -- We call it 'downBackend' because it is down for the most of this test
    -- except for setup and assertions. Perhaps there is a better name.
@@ -36,18 +37,18 @@ testNotificationsForOfflineBackends = do
    connectUsers delUser downUser1
    connectUsers delUser downUser2
    connectUsers otherUser downUser1
-   upBackendConv <- bindResponse (postConversation delUser (defProteus {qualifiedUsers = [otherUser, downUser1]})) $ getJSON 201
+   upBackendConv <- bindResponse (postConversation delUser (defProteus {qualifiedUsers = [otherUser, otherUser2, downUser1]})) $ getJSON 201
    downBackendConv <- bindResponse (postConversation downUser1 (defProteus {qualifiedUsers = [otherUser, delUser]})) $ getJSON 201
    pure (downUser1, downClient1, downUser2, upBackendConv, downBackendConv)

    -- Even when a participating backend is down, messages to conversations
    -- owned by other backends should go.
-   successfulMsgForOtherUser <- mkProteusRecipient otherUser otherClient "success message for other user"
+   successfulMsgForOtherUsers <- mkProteusRecipients otherUser [(otherUser, [otherClient]), (otherUser2, [otherClient2])] "success message for other user"
    successfulMsgForDownUser <- mkProteusRecipient downUser1 downClient1 "success message for down user"
    let successfulMsg =
          Proto.defMessage @Proto.QualifiedNewOtrMessage
            & #sender . Proto.client .~ (delClient ^?! hex)
-           & #recipients .~ [successfulMsgForOtherUser, successfulMsgForDownUser]
+           & #recipients .~ [successfulMsgForOtherUsers, successfulMsgForDownUser]
            & #reportAll .~ Proto.defMessage
    bindResponse (postProteusMessage delUser upBackendConv successfulMsg) assertSuccess
@@ -68,12 +69,13 @@ testNotificationsForOfflineBackends = do
    bindResponse (postConversation delUser (defProteus {qualifiedUsers = [otherUser, downUser1]})) $ \resp ->
      resp.status `shouldMatchInt` 533

-   -- Adding users to an up backend conversation should work even when one of
-   -- the participating backends is down
-   otherUser2 <- randomUser OtherDomain def
-   connectUsers delUser otherUser2
-   bindResponse (addMembers delUser upBackendConv [otherUser2]) $ \resp ->
-     resp.status `shouldMatchInt` 200
+   -- Adding users to an up backend conversation should not work when one of
+   -- the participating backends is down. This is due to not being able to
+   -- check non-fully connected graph between all participating backends
+   otherUser3 <- randomUser OtherDomain def
+   connectUsers delUser otherUser3
+   bindResponse (addMembers delUser upBackendConv [otherUser3]) $ \resp ->
+     resp.status `shouldMatchInt` 533

    -- Adding users from down backend to a conversation should also fail
    bindResponse (addMembers delUser upBackendConv [downUser2]) $ \resp ->
@@ -86,14 +88,17 @@ testNotificationsForOfflineBackends = do

    -- User deletions should eventually make it to the other backend.
    deleteUser delUser

+   let isOtherUser2LeaveUpConvNotif = allPreds [isConvLeaveNotif, isNotifConv upBackendConv, isNotifForUser otherUser2]
+       isDelUserLeaveUpConvNotif = allPreds [isConvLeaveNotif, isNotifConv upBackendConv, isNotifForUser delUser]

    do
      newMsgNotif <- awaitNotification otherUser otherClient noValue 1 isNewMessageNotif
      newMsgNotif %. "payload.0.qualified_conversation" `shouldMatch` objQidObject upBackendConv
      newMsgNotif %. "payload.0.data.text" `shouldMatchBase64` "success message for other user"

-     memberJoinNotif <- awaitNotification otherUser otherClient (Just newMsgNotif) 1 isMemberJoinNotif
-     memberJoinNotif %. "payload.0.qualified_conversation" `shouldMatch` objQidObject upBackendConv
-     asListOf objQidObject (memberJoinNotif %. "payload.0.data.users") `shouldMatch` mapM objQidObject [otherUser2]
+     void $ awaitNotification otherUser otherClient (Just newMsgNotif) 1 isOtherUser2LeaveUpConvNotif
+     void $ awaitNotification otherUser otherClient (Just newMsgNotif) 1 isDelUserLeaveUpConvNotif

      delUserDeletedNotif <- nPayload $ awaitNotification otherUser otherClient (Just newMsgNotif) 1 isDeleteUserNotif
      objQid delUserDeletedNotif `shouldMatch` objQid delUser
@@ -103,23 +108,17 @@ testNotificationsForOfflineBackends = do
      newMsgNotif %. "payload.0.qualified_conversation" `shouldMatch` objQidObject upBackendConv
      newMsgNotif %. "payload.0.data.text" `shouldMatchBase64` "success message for down user"

      -- FUTUREWORK: Uncomment after fixing this bug: https://wearezeta.atlassian.net/browse/WPB-3664
      -- memberJoinNotif <- awaitNotification downUser1 downClient1 (Just newMsgNotif) 1 isMemberJoinNotif
      -- memberJoinNotif %. "payload.0.qualified_conversation" `shouldMatch` objQidObject upBackendConv
      -- asListOf objQidObject (memberJoinNotif %. "payload.0.data.users") `shouldMatch` mapM objQidObject [downUser2]

-     let isDelUserLeaveDownConvNotif =
-           allPreds
-             [ isConvLeaveNotif,
-               isNotifConv downBackendConv,
-               isNotifForUser delUser
-             ]
-     void $ awaitNotification downUser1 downClient1 (Just newMsgNotif) 1 isDelUserLeaveDownConvNotif
-     void $ awaitNotification otherUser otherClient noValue 1 isDelUserLeaveDownConvNotif

      -- FUTUREWORK: Uncomment after fixing this bug: https://wearezeta.atlassian.net/browse/WPB-3664
      -- void $ awaitNotification downUser1 downClient1 (Just newMsgNotif) 1 (allPreds [isConvLeaveNotif, isNotifConv upBackendConv, isNotifForUser otherUser])
      -- void $ awaitNotification downUser1 downClient1 (Just newMsgNotif) 1 (allPreds [isConvLeaveNotif, isNotifConv upBackendConv, isNotifForUser delUser])
+     -- void $ awaitNotification downUser1 downClient1 (Just newMsgNotif) 1 isOtherUser2LeaveUpConvNotif
+     -- void $ awaitNotification otherUser otherClient (Just newMsgNotif) 1 isDelUserLeaveDownConvNotif
Comment on lines +120 to +121:

Are you planning to uncomment or remove these two lines?

Yes, see the comment above them. These don't work due to a bug: https://wearezeta.atlassian.net/browse/WPB-3664
      delUserDeletedNotif <- nPayload $ awaitNotification downUser1 downClient1 (Just newMsgNotif) 1 isDeleteUserNotif
      objQid delUserDeletedNotif `shouldMatch` objQid delUser
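The new assertions build their notification checks by combining small predicates with allPreds before handing the result to awaitNotification. As a rough illustration only (not necessarily the suite's actual definition or signature), such a combinator could look like this in the App monad:

    -- Sketch: succeed only when every predicate accepts the notification.
    -- The real allPreds in the test suite may have a different signature.
    allPredsSketch :: [a -> App Bool] -> a -> App Bool
    allPredsSketch preds notif = and <$> traverse ($ notif) preds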
Can you please explain how this dance of deleting and then immediately making a connection works?
If you have two tests: one of them connects dynBackend1 with all backends, and the second one only connects dynBackend1 with OwnDomain. If the first test runs first, the second test will always fail. So the second test must always delete all the connections dynBackend1 has and then create only a connection with OwnDomain, to ensure that it is not affected by the first test.
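For illustration, a minimal sketch of that reset-then-reconnect step, written against the suite's App monad. The helpers getFederationConnections, deleteFederationConnection, and createFederationConnection are assumed names for this sketch, not the suite's actual API:

    -- Sketch only: drop whatever federation connections an earlier test left on
    -- dynBackend1, then create just the one connection this test relies on.
    -- All three helpers below are hypothetical names.
    resetToOwnDomainOnly :: String -> App ()
    resetToOwnDomainOnly dynBackend1 = do
      existing <- getFederationConnections dynBackend1
      mapM_ (deleteFederationConnection dynBackend1) existing
      createFederationConnection dynBackend1 OwnDomain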
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I see.
Wasn't the plan to provide more isolation between test executions? This dance within the test is very fragile; it incurs what I hope can be called needless brain load, and it is not a general solution.
Is there something we can do in the (dynamic) backend setup that would make them isolated from each other (even if we go with e.g. randomly assigning identifiers (names) and such, which should be good enough if they are very unlikely to collide)?
Yes, that's why I added the FUTUREWORK. While acquiring an environment we could simply truncate the table which keeps this information around.
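As a rough sketch of that idea only: the cleanup could hook into the point where a dynamic backend is acquired from the resource pool. The helper runCqlOnBackend, the BackendResource-style handle, and the table name federation_remotes are assumptions here, not the actual schema or API:

    -- Sketch only: wipe leftover federation connections whenever a dynamic
    -- backend is handed out, so tests no longer depend on execution order.
    -- runCqlOnBackend and the table name are hypothetical.
    acquireCleanBackend :: BackendResource -> App BackendResource
    acquireCleanBackend backend = do
      runCqlOnBackend backend "TRUNCATE federation_remotes"
      pure backend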