Support autodelete flag on queue and exchange in ConventionalRoutingTopology #777

Open
mikelneo opened this issue Mar 24, 2021 · 6 comments

Comments

@mikelneo

mikelneo commented Mar 24, 2021

Currently, all queues and exchanges in ConventionalRoutingTopology are declared with the autoDelete: false flag:

https://github.com/Particular/NServiceBus.RabbitMQ/blob/master/src/NServiceBus.Transport.RabbitMQ/Routing/ConventionalRoutingTopology.cs#L130
https://github.com/Particular/NServiceBus.RabbitMQ/blob/master/src/NServiceBus.Transport.RabbitMQ/Routing/ConventionalRoutingTopology.cs#L190

In some of my scenarios I need to declare some of the queues and exchanges with autoDelete: true.
Would it be possible to provide a virtual method like

protected virtual bool GetAutodeleteForAddress(string address) => false;

in ConventionalRoutingTopology, so that I can override it in my custom topology?
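
For illustration, the override in my custom topology could look like this (EphemeralAwareTopology and the instance- prefix are made-up names, and this won't compile today since the hook doesn't exist):

using System;

// Hypothetical sketch only: assumes ConventionalRoutingTopology exposed the
// proposed virtual hook and consulted it when declaring queues and exchanges.
class EphemeralAwareTopology : ConventionalRoutingTopology
{
    protected override bool GetAutodeleteForAddress(string address) =>
        address.StartsWith("instance-", StringComparison.Ordinal);
}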

@bording
Member

bording commented Mar 24, 2021

@mikelneo If you are creating a custom topology, then you have full control over how queues and exchanges are declared. I get the impression that you're attempting to derive from ConventionalRoutingTopology. If you basically want ConventionalRoutingTopology with some tweaks, I recommend copying the code and making the changes you want instead.
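
To illustrate, in a copied topology the change amounts to replacing the hardcoded autoDelete: false arguments with your own rule. A rough sketch against the RabbitMQ .NET client (IsEphemeral and the instance- prefix are placeholders for whatever rule you need):

using System;
using RabbitMQ.Client;

static class DeclareSketch
{
    // Placeholder rule; substitute whatever identifies your ephemeral addresses.
    static bool IsEphemeral(string address) =>
        address.StartsWith("instance-", StringComparison.Ordinal);

    // The conventional topology declares a durable queue plus a fanout exchange
    // with the same name and binds them; only the autoDelete arguments differ here.
    public static void CreateQueueAndExchange(IModel channel, string address)
    {
        channel.QueueDeclare(address, durable: true, exclusive: false,
            autoDelete: IsEphemeral(address), arguments: null);
        channel.ExchangeDeclare(address, ExchangeType.Fanout, durable: true,
            autoDelete: IsEphemeral(address));
        channel.QueueBind(address, address, string.Empty);
    }
}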

@mikelneo
Author

@bording

I get the impression that you're attempting to derive from ConventionalRoutingTopology

Yes, that's exactly my case. I'm totally fine with the topology itself, but I need to customize the settings for some queues.

I recommend copying the code and making the changes you want instead.

I know about this option, but that way, if a bug is fixed in ConventionalRoutingTopology or other changes are made, I will have to port them into my copied code manually. I just want some flexibility in ConventionalRoutingTopology.

@bording
Member

bording commented Mar 25, 2021

Auto-delete queues and exchanges don't fit with the sort of guarantees we want to be able to provide with NServiceBus by default, so it's very unlikely that we'd make changes to enable them in our built-in topologies.

If you need them, you'll need to implement them in your own custom topology, which is why I suggested copying the code if you want ConventionalRoutingTopology with some tweaks.

@Dunge

Dunge commented Jun 29, 2022

+1 for the feature request.

I'm moving my solution to Kubernetes, and some services use the pod name as a unique instance identifier. When NServiceBus creates the queues/exchanges, it uses that identifier to name them, so that each instance of the service gets its own queue (rather than a shared one). The problem is that Kubernetes pods are very ephemeral and can be destroyed and recreated at any time. So what I end up with is tons of queues and exchanges that get created and never deleted, with messages filling them up and using all the resources.

I don't care about the unhandled message data inside these queues; it's meant to be temporary as well. I can set a TTL on the messages, but I would also like to delete the related queues/exchanges when my service pod shuts down.

I managed to add an "expires" policy to all my queues in the RabbitMQ config, but it's hard to find a proper regex match, so for now it also deletes the "error"/"audit"/"nsb.delay" queues and my other shared queues that need to be durable, which isn't good. It's also impossible to set such a policy for exchanges; the auto-delete flag needs to be set on creation, which is done by the NSB conventional routing installer.
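
For reference, the policy I have now looks roughly like this (the policy name and the one-hour expiry value are just examples), and the catch-all pattern is exactly what hits the shared queues too:

rabbitmqctl set_policy queue-expiry ".*" '{"expires":3600000}' --apply-to queues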

Or maybe there's something inherently wrong about my design?

@bording
Member

bording commented Jun 29, 2022

I'm moving my solution to Kubernetes, and some services use the pod name as a unique instance identifier.

Kubernetes pods are very ephemeral

We do have some guidance around choosing instance discriminators, and we recommend against using something ephemeral for this exact reason. For example, see the warning here.
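For example, with a discriminator that is stable across pod restarts, instance-specific queues get reused instead of orphaned. A sketch (the endpoint name and the INSTANCE_ID variable are placeholders; a StatefulSet ordinal is one way to get a stable value):

using System;
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("Sales");
// Pass a stable, reusable discriminator rather than an ephemeral pod name.
// INSTANCE_ID is a placeholder environment variable, e.g. a StatefulSet ordinal.
endpointConfiguration.MakeInstanceUniquelyAddressable(
    Environment.GetEnvironmentVariable("INSTANCE_ID"));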

Or maybe there's something inherently wrong about my design?

Without knowing more about what you're trying to do, it's hard to say.

However, it's unusual to need a lot of instance-specific queues that also have messages that you don't care about processing. That's usually a sign of doing something like data distribution, which we recommend you not use NServiceBus for.

@Dunge

Dunge commented Jun 29, 2022

Thanks.

Yes, I was aware of the "uniquely addressable" feature for callbacks. But my case isn't about callbacks; it's about the whole event handler.

A common example of a data distribution scenario is having cached data on multiple scaled-out web servers and attempting to deliver a message to each of them. The message indicates each server should drop their current cache entries and retrieve fresh data from the database.

Yup, that's one of the scenarios that fits. An action is completed, and I need a message published to all instances to reload the cache. It can also be something other than refreshing data, for example relaying information to all users connected to that service instance. We do use Redis for sharing data, but as I said, it's not always just about sharing data; it's more about reacting to events. I'm aware that Redis also supports pub/sub, but NServiceBus's message serialization and the threading model in async handlers make things so much easier, faster, and cleaner. It would be a shame to delete all that and convert it to a more basic solution when our NSB/RabbitMQ named-instance setup has worked so well for years. Also note that we have other services too that use a shared queue across all instances, as the asynchronous messaging pattern is designed.

In any case, sorry for sidetracking from the original issue.
