Put job back on the queue #221
Can you let me know why you're wanting to put it back in the queue? Is it due to a failure of a job? If so, you set the number of maximum attempts.

---
There is a throughput limit for the job, so sometimes I want to delay handling until later.

---
The sample code below should run at most two jobs simultaneously. If you change maxJobs to 1, you'll see a delay in when they are removed from the queue.

```javascript
var kue = require('../index.js'),
    jobs = kue.createQueue(),
    job1, job2,
    maxJobs;

maxJobs = 2;

jobs.process('test-job', maxJobs, function(job, done) {
  console.log(job.data.id);
  setTimeout(function() {
    done();
  }, 1500);
});

job1 = jobs.create('test-job', { id : 1 }).save();
job2 = jobs.create('test-job', { id : 2 }).save();
```

Net/net, you want to avoid popping and reinserting a job if it's possible to not pop it in the first place.

---
Sure, currently I'm processing 20 jobs concurrently.

My particular use case is a rate-limited API. When the API for a particular job is exhausted, I was wanting to add it back to the queue and process another job.

---
Gotcha… even if you were able to put it back on, in theory it could get processed right away. Could create some nasty circles. It also doesn't stop other jobs from being processed. In theory, you'd have to requeue the item and introduce delays on all related jobs… seems messy.

Could you keep track of "time until available" for your API calls? i.e. you know what your limit is. Before creating a job, you would call "getDelay", which would return a time depending on how many jobs you'd called recently. You would keep updating this number with a setInterval. Using this method, you'd do a lot less queueing/requeueing/editing of existing jobs. Rather, you'd simply use the "delay" option currently in existence.

On Wednesday, July 17, 2013 at 6:42 PM, Richard wrote:
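A minimal sketch of the "time until available" bookkeeping suggested here, assuming a fixed-window rate limit. All names and numbers (`getDelay`, `reserveSlot`, `RATE_LIMIT`, `WINDOW_MS`) are illustrative, not part of Kue:

```javascript
// Sketch of the "getDelay" idea: keep timestamps of recent API calls
// and compute how long a new job should be delayed before it can run.
// RATE_LIMIT, WINDOW_MS, and the function names are hypothetical.
var RATE_LIMIT = 3,       // max API calls allowed per window
    WINDOW_MS  = 60000,   // rate-limit window length in ms
    callTimes  = [];      // timestamps of calls made or reserved

function getDelay(now) {
  // Drop timestamps that have fallen outside the current window.
  callTimes = callTimes.filter(function (t) { return now - t < WINDOW_MS; });
  if (callTimes.length < RATE_LIMIT) return 0;
  // The oldest reserved call is the first to leave the window.
  return callTimes[0] + WINDOW_MS - now;
}

function reserveSlot(now) {
  var delay = getDelay(now);
  callTimes.push(now + delay);   // reserve this slot in the window
  return delay;
}

// With Kue, the returned delay would feed the existing "delay" option
// instead of requeueing, roughly:
//   jobs.create('api-call', data).delay(reserveSlot(Date.now())).save();
```

The point is that the delay is computed once, up front, so jobs never need to be popped and pushed back on.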
---
Yes, I have been adding back with a delay. Unfortunately it's not practical to schedule the job times for each API ahead of time. I was planning to modify kue so that if done("requeue") is called then kue would add the job back to the queue.

On Wed, Jul 17, 2013 at 7:21 PM, brandoncarl notifications@github.com wrote:
---
Not that I know of, and honestly that doesn't seem like a good solution at all. It's going to put you into an infinite loop and hog all of your processing power. If you do this, you'll need to halt the process. If this existed, you'd choose a preset number of failure attempts and auto-restart once the process started back up again.

Brandon Carl

On Wednesday, July 17, 2013 at 7:35 PM, Richard wrote:
---
No, it does not create an infinite loop or hog processing power. I modified kue so that the job is added back with a delay.

On Wed, Jul 17, 2013 at 7:42 PM, brandoncarl notifications@github.com wrote:
---
Yes, it can, in the event that your load is too high. Job processed > job requeued > job processed > job requeued – what's to stop this? Basically, jobs keep getting popped and pushed back on over and over again. You won't have a problem with only a few jobs. But assume you eat up your API calls in the first 5 seconds of your interval. You will have an infinite loop.

On Wednesday, July 17, 2013 at 8:10 PM, Richard wrote:
---
Here is sample code demonstrating an infinite loop:

```javascript
var kue = require('../index.js'),
    jobs = kue.createQueue(),
    limitReached = true;

setTimeout(function() {
  limitReached = false;
}, 5000);

jobs.process('test-job', function(job, done) {
  console.log('Processing job');
  if (!limitReached)
    done();
  else {
    // Simulates re-queueing a job
    jobs.create('test-job', { idx : 1 }).save();
    done();
  }
});

jobs.create('test-job', { idx : 1 }).save();
```

FYI, this seems to be a repeat of #182 and #133 from what I can tell.

---
A delay is used when popping back on. If there is a better approach I would love to hear it.

On Wed, Jul 17, 2013 at 8:50 PM, brandoncarl notifications@github.com wrote:
---
The problem is that nothing stops further jobs from being popped and running into the same fate. As I said earlier, you would ideally have a "halt processing / resume processing" feature set, as discussed in the other referenced issues.

Brandon Carl

On Thursday, July 18, 2013 at 6:41 PM, Richard wrote:
---
Those further jobs could also be delayed...

On Thu, Jul 18, 2013 at 8:40 PM, brandoncarl notifications@github.com wrote:
---
Honestly – you keep asking for alternative solutions, and all you seem to want to do is argue with them! This basically causes you to recycle every related job in your queue through the same fate. If there are a lot of them, you'll bump up the processing load and bottleneck your worker. I can't think of many great solutions to this, but a much better one has to do with halting the process.

On Friday, July 19, 2013 at 5:30 PM, Richard wrote:
---
Yeah, it seems there is no ideal solution - thanks for the feedback. If requeuing…

On Fri, Jul 19, 2013 at 9:32 PM, brandoncarl notifications@github.com wrote:
---
Hey @richardpenman, I'm having a very similar issue. I know it's been a while, but can you let me know how you ended up handling this? Thanks!

---
I needed to modify Kue so that a call to done("requeue") would add the job back to the queue after a delay. Not ideal.
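For anyone who would rather not patch Kue, here is a rough sketch of the same requeue-with-delay idea, with an attempt cap added to address the infinite-loop concern raised earlier in the thread. `requeueDecision` and its constants are hypothetical names, not Kue API:

```javascript
// Decide whether a rate-limited job should go back on the queue, and
// with what delay. A cap on requeues avoids looping forever, and an
// exponential back-off keeps repeated requeues from hammering the queue.
var BASE_DELAY_MS = 30000,  // initial back-off (illustrative value)
    MAX_REQUEUES  = 5;      // give up after this many requeues

function requeueDecision(requeueCount) {
  if (requeueCount >= MAX_REQUEUES) {
    return { requeue: false };
  }
  return { requeue: true, delayMs: BASE_DELAY_MS * Math.pow(2, requeueCount) };
}

// In a Kue processor this might be wired up roughly like (untested sketch;
// 'api-call', limitReached, and callApi are made-up names):
//   jobs.process('api-call', function (job, done) {
//     if (!limitReached) return callApi(job.data, done);
//     var count = job.data.requeues || 0;
//     var d = requeueDecision(count);
//     if (!d.requeue) return done(new Error('rate limited: giving up'));
//     job.data.requeues = count + 1;
//     jobs.create('api-call', job.data).delay(d.delayMs).save(done);
//   });
```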
See #553 for another attempt at this. cc/ @andrewtamura

---
When processing the queue, how can a job be put back on the queue?
One possibility would be increasing the attempts and returning an error:
Is there a proper way?
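The snippet that originally followed the colon above is missing from this page. A hedged guess at what the attempts-and-error pattern looks like, using Kue's `attempts()` and the convention that calling `done(err)` fails the job so it gets retried (`makeProcessor`, `rateLimitReached`, and `callApi` are made-up names):

```javascript
// Hypothetical sketch of the "increase attempts and return an error"
// approach: the processor fails the job when the rate limit is hit;
// because the job was saved with attempts(5), it gets retried.
function makeProcessor(rateLimitReached, callApi) {
  return function (job, done) {
    if (rateLimitReached()) {
      // Failing the job consumes one attempt and puts it back for retry.
      return done(new Error('rate limit reached, retrying later'));
    }
    callApi(job.data, done);
  };
}

// With Kue this would be wired up roughly as:
//   var jobs = require('kue').createQueue();
//   jobs.create('api-call', { id: 1 })
//       .attempts(5)   // retry the job up to 5 times on failure
//       .save();
//   jobs.process('api-call', makeProcessor(rateLimitReached, callApi));
```

As the discussion above notes, this retries the one job but does nothing to stop other jobs from hitting the same limit.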