galley: Send on-user-deleted-conversations backend notification through RabbitMQ #3333
Conversation
Force-pushed from fdbe21f to 893cb06
@@ -0,0 +1,62 @@
{-# LANGUAGE NumericUnderscores #-}
This is in the cabal file now, right?
LGTM! just nit-picks.
@@ -23,6 +23,11 @@ config:
host: aws-cassandra
replicaCount: 3
enableFederation: false # keep enableFederation default in sync with brig and cargohold chart's config.enableFederation as well as wire-server chart's tags.federation
# Not used if enableFederation is false
# Not used if enableFederation is false
# It is an error to set this if enableFederation is false
It is not an error to set the helm value; it just gets ignored. If we want to make it an error, we would have to remove the comment from here and put it in some docs, which would make it less discoverable.
leaveRemoteConversations cids = do
leaveRemoteConversations cids =
this is creating a lot of noise, we're adding and removing redundant do's at about the same frequency.
Right udcnD <- pure . eitherDecode . frBody $ dReq
sort (fromRange (F.udcvConversations udcnD)) @?= sort [convD1]
F.udcvUser udcnD @?= qUnqualified alexDel
assertEqual ("expect exactly 4 federated requests in : " <> show fedRequests) 4 (length fedRequests)
ok, the federated calls don't happen here any more, so we can't test for these events. but should/can we test for something else?
related: a half-solution to the MakesFederatedCall tolerance for removing calls without removing the constraint could be that we add another kind of constraint PushesFederatedNotifications. then we would get a type error if we forget to add the new constraint, and that would help us remember to remove the old one. not sure that's worth it, though.
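For concreteness, a rough sketch of the kind of marker constraint being floated here; every name below is invented for illustration and nothing like it exists in this PR. The idea is a methodless class that the low-level publishing helper requires, so the constraint bubbles up through any handler that ends up pushing a backend notification:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}

module PushNotificationConstraint where

import qualified Data.ByteString.Lazy as BL
import Data.Proxy (Proxy)
import GHC.TypeLits (Symbol)

-- Methodless marker class, analogous in spirit to MakesFederatedCall.
-- Forgetting to annotate a handler that pushes a notification becomes a
-- compile error; a stale MakesFederatedCall next to it is then easier to
-- spot and remove.
class PushesFederatedNotification (rpc :: Symbol)

-- Hypothetical publishing helper that forces callers to carry the constraint.
publishFederatedNotification ::
  PushesFederatedNotification rpc =>
  Proxy rpc ->
  BL.ByteString ->
  IO ()
publishFederatedNotification _rpc _payload =
  pure () -- real code would enqueue the payload on the backend-notifications queue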
ok, the federated calls don't happen here any more, so we can't test for these events. but should/can we test for something else?
This functionality is already tested in federation end-to-end tests. If we really want to do something here, we'd have to create a RabbitMQ vhost (it's like a namespace), pass that to our test, and then assert that there is something in the queue. But honestly, that seems a bit redundant given the other tests. I am making sure that I don't delete tests without checking that other tests cover the same behaviour.
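For reference, such an assertion could look roughly like the sketch below, using the amqp package; the queue name, vhost, and credentials are made up and this is not code from the PR:

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Lazy as BL
import Network.AMQP

-- Peek at a per-test vhost and assert that galley enqueued a notification
-- for the remote backend. Connection details and queue name are invented.
assertNotificationEnqueued :: IO ()
assertNotificationEnqueued = do
  conn <- openConnection "localhost" "test-vhost" "guest" "guest"
  chan <- openChannel conn
  mMsg <- getMsg chan Ack "backend-notifications.d2.example.com"
  case mMsg of
    Nothing -> error "expected a notification in the queue, but it was empty"
    Just (msg, env) -> do
      ackEnv env
      -- here one would decode msgBody msg and compare it against the
      -- expected user-deleted-conversations payload
      print (BL.length (msgBody msg))
  closeConnection conn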
related: a half-solution to the MakesFederatedCall tolerance for removing calls without removing the constraint could be that we add another kind of constraint PushesFederatedNotifications. then we would get a type error if we forget to add the new constraint, and that would help us remember to remove the old one. not sure that's worth it, though.
I thought the point behind adding MakesFederatedCall was to ensure that the possibility of federation errors shows up in swagger. If we fail to connect to RabbitMQ, it is just a 500 because the backend is not well. Do you think it would help swagger somehow if we added this new annotation? Do the clients need to care that this gets sent via RabbitMQ?
@@ -104,3 +108,15 @@ currentFanoutLimit o = do
  let optFanoutLimit = fromIntegral . fromRange $ fromMaybe defFanoutLimit (o ^. (optSettings . setMaxFanoutSize))
  let maxTeamSize = fromIntegral (o ^. (optSettings . setMaxTeamSize))
  unsafeRange (min maxTeamSize optFanoutLimit)

mkRabbitMqChannel :: Logger -> Opts -> IO (Maybe (MVar Q.Channel))
mkRabbitMqChannel l (view optRabbitmq -> Just RabbitMqOpts {..}) = do
should this go to wire-api? that would also pull a few other changes there, and clean up the arguments (Opts is not available in wire-api, but RabbitMqOpts should be).
I think wire-api is the wrong place to put it, as this has nothing to do with the API. I was thinking of shoving it in the extended package, but forgot about it. Let's do it in the next PR?
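For reference, the decoupled shape could look roughly like this; the field names on RabbitMqOpts are assumptions, the retry/reconnect logic from this PR is omitted, and this is a sketch rather than the PR's implementation:

{-# LANGUAGE RecordWildCards #-}

import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar)
import Data.Text (Text)
import qualified Network.AMQP as Q

-- Field names are assumptions; the real options type in the PR may differ.
data RabbitMqOpts = RabbitMqOpts
  { host :: String,
    port :: Int,
    vHost :: Text
  }

-- Depends only on RabbitMqOpts (plus credentials), so it could live in a
-- shared package such as extended instead of galley. No retry handling here.
mkRabbitMqChannel :: Text -> Text -> RabbitMqOpts -> IO (MVar Q.Channel)
mkRabbitMqChannel username password RabbitMqOpts {..} = do
  chanVar <- newEmptyMVar
  conn <- Q.openConnection' host (fromIntegral port) vHost username password
  chan <- Q.openChannel conn
  putMVar chanVar chan
  pure chanVar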
  Right <$> recovering policy handlers (const $ go ownDomain chanVar)
  where
    logError willRetry (SomeException e) status = do
      Log.err $
This is something that may well happen every now and then without being an error, e.g. if only one backend-pusher is running and needs to be restarted, or during startup. Should the log level be info, and only error once the retry count has been exhausted?
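A rough sketch of that suggested behaviour (assuming tinylog's System.Logger and Control.Retry's reporter shape; this is not the code in the PR):

{-# LANGUAGE OverloadedStrings #-}

import Control.Exception (SomeException (..))
import Control.Retry (RetryStatus, rsIterNumber)
import qualified Data.Text as T
import qualified System.Logger as Log

-- Log at info while the attempt will still be retried, and only escalate to
-- error once retries are exhausted (i.e. willRetry is False).
logConnectionError :: Log.Logger -> Bool -> SomeException -> RetryStatus -> IO ()
logConnectionError l willRetry (SomeException e) status =
  (if willRetry then Log.info else Log.err) l $
    Log.msg (Log.val "Failed to connect to RabbitMQ")
      . Log.field "error" (T.pack (show e))
      . Log.field "attempt" (T.pack (show (rsIterNumber status)))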
This can only happen when RabbitMQ is down or unreachable. So, this is definitely an error.
https://wearezeta.atlassian.net/browse/WPB-200
Also includes some fixups for #3276 (all brig-related changes are fixups).
Checklist
changelog.d