
Fire & forget mode – publish message without waiting for response #17

Closed
cjnqt opened this issue Mar 10, 2016 · 9 comments

Comments


cjnqt commented Mar 10, 2016

The plugin follows the RPC-pattern, which is great.

But for some long-running tasks, I would rather just publish a new message (`seneca.act(...)`) and not wait around for the reply.

Is this possible? Can I close the calling process or do I have to disable the response queue or something?

@cjnqt cjnqt changed the title Fire & forget mode – not waiting for response Fire & forget mode – publish message without waiting for response Mar 10, 2016
@nfantone
Collaborator

@cjnqt No, not currently, since that is not a feature in Seneca itself (AFAIK). But it would be nice to have it and it certainly could be done without much hassle.

Adding this as a feature request.

@nfantone
Collaborator

Easiest current alternative would be to call the callback immediately on the .add side and then proceed with the long running task.
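The workaround above can be sketched as follows. This is a minimal, self-contained stand-in: the `add`/`act` pair below is a hypothetical stub that mimics the shape of Seneca's API, not Seneca's real transport.

```javascript
// Early-ack pattern: the .add handler calls its callback immediately,
// then continues the long-running work afterwards.
// `add`/`act` here are a toy stand-in for Seneca's API, NOT the real thing.

const handlers = new Map();

function add(pattern, handler) {
  handlers.set(pattern, handler);
}

function act(pattern, msg, reply) {
  handlers.get(pattern)(msg, reply);
}

add('role:task,cmd:run', (msg, done) => {
  // Acknowledge right away so the caller is not blocked...
  done(null, { queued: true, id: msg.id });
  // ...then do the long-running work "in the background".
  setImmediate(() => {
    // simulate slow work here
  });
});

act('role:task,cmd:run', { id: 42 }, (err, out) => {
  console.log('early ack received:', out.queued);
});
```

The caller gets its reply before the heavy work even starts; the trade-off (discussed below) is that the caller still needs the worker to be alive to receive the message at all.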


cjnqt commented Mar 10, 2016

Thanks, I'll call the callback early on the .add side as you suggested!

@nfantone
Collaborator

Let me know how that goes for you.

@nfantone
Collaborator

@cjnqt I've found this:

https://github.com/senecajs/seneca-fire-and-forget

I haven't tried it, but it seems to be what you are after. seneca-amqp-transport would still create the reply queue, though.


cjnqt commented Mar 10, 2016

Interesting, will check it out!

My use case is that I have a central task manager service that listens for "create new task" commands and is responsible for starting different tasks (which are implemented as Seneca services). Some tasks take a long time and some are quick. The task manager is heavily used, so I don't want to block it more than necessary.

If the task manager starts a long-running task and waits for it to finish, it will be blocked from receiving new "create new task" commands during that time.

By letting tasks call their callback immediately, the task manager won't have to wait for the task to finish. This is better, but it still depends on the task worker actually receiving the message and calling its callback (if it doesn't, the task manager will be blocked until the request times out).

Optimally, I'd like to be able to just wait for an ACK from the message queue (so the scheduling task knows that the message is in the queue), and then be free to do other work.
This way the system is more resilient – if a task breaks down or has a bug that prevents the callback from being fired, the task manager isn't blocked until timeout.
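What this describes maps to a broker-level acknowledgement (in AMQP terms, a publisher confirm): the producer waits only for the broker to confirm the message is enqueued, never for the consumer. A toy in-memory sketch of the idea – the `Broker` class below is illustrative only, not a real AMQP client:

```javascript
// Toy in-memory "broker" illustrating broker-side acks: the producer is
// released as soon as the message is safely queued, regardless of whether
// any consumer is around yet. Not a real AMQP implementation.

class Broker {
  constructor() {
    this.queue = [];
    this.consumer = null;
  }

  // Producer side: ack as soon as the message is in the queue.
  publish(msg, onAck) {
    this.queue.push(msg);
    onAck(null); // broker confirms receipt immediately
    setImmediate(() => this.drain());
  }

  // Consumer side: picks messages up whenever it attaches.
  subscribe(fn) {
    this.consumer = fn;
    setImmediate(() => this.drain());
  }

  drain() {
    while (this.consumer && this.queue.length) {
      this.consumer(this.queue.shift());
    }
  }
}

const broker = new Broker();

broker.publish({ task: 'long-running' }, (err) => {
  // Free to do other work from here on; the message is safely queued.
  console.log('broker acked, moving on');
});

// The consumer may attach much later; the producer was never blocked.
broker.subscribe((msg) => console.log('picked up:', msg.task));
```

With a real broker the same shape would apply: a crashed or buggy worker no longer blocks the producer, because the producer's contract ends at the enqueue ack.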


cjnqt commented Mar 18, 2016

Thinking about it, I think this is an important topic.

The benefit of message queues – the thing that makes them more than just another transport protocol – is that they allow us to send a message without requiring someone on the other side to be available this very second.
Someone can pick up the message seconds later, that is fine, the message is stored in the queue. This allows true decoupling of services: One microservice is busy? No problem, we just enqueue messages for it in the message queue.

Right now, we don't really use the queueing feature: we require the producer (the .act call) to wait for a response from the receiver. Having the receiver call the done() callback as early as possible doesn't really solve it – another process can send a message while the receiver is busy with its work, and then the receiver isn't available to call done() early...

@nfantone
Collaborator

@cjnqt You make some good points. And while I agree with your general perspective, I'd like to comment on a few of them.

The benefit of message queues [...] is that they allow us to send a message without requiring someone on the other side to be available this very second.

That is one of the (main) benefits. But certainly not the only one, and not the only way to benefit from AMQP.

Right now, we don't really use the queueing feature, we require the producer (the .act call) to wait for a response from the receiver.

The thing is, you are not actually required to use a queueing protocol. You shouldn't even be aware of it: you have chosen to use Seneca, so these things shouldn't be a concern – to the point where you could swap out your entire transport layer, replace it with another protocol, and still maintain functionality.

Of course, that comes at a price. Seneca works in an RPC fashion and all transports implement that. If your needs go beyond RPC, things could be adapted a bit (like that seneca-fire-and-forget plugin), but if your core revolves around another scheme, perhaps you should consider not using Seneca at all. Or use it just in the bits that make sense. There are many other solutions that would fit your scope better (using amqplib manually would be one).

Having the receiver call the done() callback as early as possible doesn't really solve it – Another process can send a message while the receiver is doing its work, then it won't be there to call done() early...

Two things here:

  1. Isn't the receiver the one calling the callback and exiting immediately? What other "work" is there to be done? It should not become a bottleneck for you. Scratch that. I misunderstood you. You could try using a cluster of workers to work around that. But your best bet would be option number two below.

  2. From what you are expressing, it seems like you have only one consumer on your queue. Just spawn another listener process and have it handle the new requests. Or spawn two more. Or three. Or a hundred. That's the beauty of it (as long as you have the RAM, that is).
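Point 2 above is the competing-consumers pattern: several workers share one queue, so a slow task only ties up one of them. A minimal in-memory sketch – the round-robin dispatch below is a simplification of how a broker spreads work, not real queue semantics:

```javascript
// Competing-consumers sketch: N workers share one stream of messages,
// each message going to exactly one worker (round-robin, similar in
// spirit to AMQP's default distribution among consumers on a queue).

function dispatch(messages, workers) {
  const handled = workers.map(() => []);
  messages.forEach((msg, i) => {
    const w = i % workers.length; // pick the next worker in turn
    handled[w].push(workers[w](msg));
  });
  return handled;
}

const workers = [
  (msg) => `worker-0 did ${msg}`,
  (msg) => `worker-1 did ${msg}`,
  (msg) => `worker-2 did ${msg}`,
];

const result = dispatch(['a', 'b', 'c', 'd'], workers);
console.log(result[0]); // worker-0 handled 'a' and 'd'
```

Adding a worker just means adding an entry to the pool; in the real system it would mean starting another listener process on the same queue.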


Having said all that, I still believe supporting some kind of work-queue mechanism is in order, as I've said before. It is, after all, right there in our little roadmap.


cjnqt commented Mar 18, 2016

Thanks for the comments!

Seneca works in a RPC fashion and all transports implement that. If your needs go beyond RPC, things could be adapted a bit (like that seneca-fire-and-forget plugin), but if your core revolves around another scheme, perhaps you should consider not using Seneca

Hmm, you are right. Seneca is a (beautiful) tool for working in an RPC fashion; if I need something else, I should probably use another tool (a plain message queue)...

Have to ponder this a bit more. As you pointed out, spawning more listener processes is a way to go forward.

@cjnqt cjnqt closed this as completed Mar 18, 2016