
Put job back on the queue #221

Closed
richardbaronpenman opened this issue Jul 17, 2013 · 18 comments

@richardbaronpenman

When processing the queue, how can a job be put back on the queue?

One possibility would be to increase the allowed attempts and return an error:

job._max_attempts = parseInt(job._max_attempts, 10) + 1;
job.save(function() {
    done("handle later"); // fail this attempt; the extra attempt lets kue retry it
});

Or perhaps just remove the job and create it again:

job.remove(function() {
    jobs.create(...).save(function() {
        done();
    });
});

Is there a proper way?

@aventuralabs

Can you let me know why you're wanting to put it back in the queue? Is it due to a failure of a job? If so, you can set the maximum number of attempts.
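
For reference, a minimal sketch of setting a retry budget (the job type, data, and count here are made up):

var kue = require('kue'),
    jobs = kue.createQueue();

// Let the job fail and be retried up to 5 times before
// kue marks it failed for good.
jobs.create('api-call', { id: 1 }).attempts(5).save();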

@richardbaronpenman
Author

There is a throughput limit for the job, so sometimes I want to delay handling until later.

@aventuralabs

jobs.process by default only processes a single job concurrently (you can specify more). Let's say your job handler was processing 5 jobs, and maxed out. When a job completes, that should free up room to process an additional job, no?

The sample code below should run at most two jobs simultaneously. If you change maxJobs to 1, you'll see a delay in when they are removed from the queue.

var kue = require('../index.js'),
    jobs = kue.createQueue(),
    job1, job2,
    maxJobs;

maxJobs = 2;

jobs.process('test-job', maxJobs, function(job, done) {
  console.log(job.data.id);
  setTimeout(function() {
    done();
  }, 1500);
});

job1 = jobs.create('test-job', { id : 1 }).save();
job2 = jobs.create('test-job', { id : 2 }).save();

Net/net, you want to avoid popping and reinserting a job if it's possible not to pop it in the first place.

@richardbaronpenman
Author

Sure, I'm currently processing 20 jobs concurrently.

When a job completes, that should free up room to process an additional job, no?

My particular use case is a rate-limited API. When the API quota for a particular job is exhausted, I want to add the job back to the queue and process another job.

@aventuralabs

Gotcha… even if you were able to put it back on, in theory it could get processed right away. That could create some nasty cycles. It also doesn't stop other jobs from being processed. In theory, you'd have to requeue the item and introduce delays on all related jobs… seems messy.

Could you keep track of "time until available" for your API calls? i.e. you know what your limit is. Before creating a job, you would call "getDelay", which would return a time depending on how many calls you'd made recently. You would keep updating this number with a setInterval.

Using this method, you'd do a lot less queueing/requeueing/editing of existing jobs. Rather, you'd simply use the "delay" option that already exists.
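
A rough sketch of the idea (getDelay, the limit numbers, and the job type are all made up; this assumes kue's delay() plus promote() for delayed jobs):

var kue = require('kue'),
    jobs = kue.createQueue();

jobs.promote(); // periodically move due delayed jobs back onto the queue

// Hypothetical tracker: allow `limit` calls per `windowMs`.
var limit = 100,
    windowMs = 60 * 1000,
    callTimes = [];

function getDelay() {
  var now = Date.now();
  // Forget calls that have aged out of the window.
  callTimes = callTimes.filter(function(t) { return now - t < windowMs; });
  if (callTimes.length < limit) return 0;
  // Otherwise wait until the oldest call leaves the window.
  return callTimes[0] + windowMs - now;
}

function enqueueApiCall(data) {
  var delay = getDelay();
  callTimes.push(Date.now() + delay); // reserve a slot (coarse: bursts may share one)
  var job = jobs.create('api-call', data);
  if (delay > 0) job.delay(delay);
  job.save();
}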


@richardbaronpenman
Author

Yes, I have been adding back with a delay.

Unfortunately it's not practical to schedule the job times for each API ahead of time, in part because that depends on API response times, which are not known in advance.

I was planning to modify kue so that if done("requeue") is called, kue internally adds the job back. Or is there a better way?


@aventuralabs

Not that I know of, and honestly that doesn't seem like a good solution at all. It's going to put you into an infinite loop and hog all of your processing power. If you do this, you'll need to halt the process.

If this existed, you'd choose a preset number of failure attempts and auto-restart once the process started back up again.


@richardbaronpenman
Author

No, it does not create an infinite loop or hog processing power. I modified the relevant function in kue to handle this case, which works, but I would prefer to avoid modifying the internals.


@aventuralabs

Yes, it can, in the event that your load is too high.

Job processed > job requeued > job processed > job requeued – what's to stop this?

Basically, jobs keep getting popped and pushed back on over and over again. You won't have a problem with only a few jobs. But suppose you use up your API quota in the first 5 seconds of your interval: you will have an infinite loop.


@aventuralabs

Here is sample code demonstrating an infinite loop:

var kue = require('../index.js'),
    jobs = kue.createQueue(),
    limitReached = true;

// Pretend the rate limit resets after 5 seconds.
setTimeout(function() {
  limitReached = false;
}, 5000);

jobs.process('test-job', function(job, done) {

  console.log('Processing job');

  if (!limitReached)
    done();
  else {
    // Simulates re-queueing a job: the clone is picked up immediately,
    // so this spins in a tight loop until limitReached flips.
    jobs.create('test-job', { idx : 1 }).save();
    done();
  }
});

jobs.create('test-job', { idx : 1 }).save();

FYI, this seems to be a repeat of #182 and #133 from what I can tell.

@richardbaronpenman
Author

A delay is used when popping the job back on, so it doesn't spin in a tight loop.

If there is a better approach I would love to hear it.


@aventuralabs

The problem is that nothing stops further jobs from being popped and meeting the same fate. As I said earlier, you would ideally have a "halt processing / resume processing" feature set, as discussed in the other referenced issues.
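
For what it's worth, a sketch of what halt/resume can look like, using the worker pause/resume API that later versions of kue added (not available when this thread was written; callApi is a made-up stand-in):

var kue = require('kue'),
    jobs = kue.createQueue();

// Stand-in for the real rate-limited API call.
function callApi(data, cb) { cb(null, /* rateLimited = */ false); }

// Handlers declared as (job, ctx, done) receive a worker context
// that can pause and resume fetching of new jobs.
jobs.process('api-call', function(job, ctx, done) {
  callApi(job.data, function(err, rateLimited) {
    if (rateLimited) {
      // Stop pulling new jobs rather than requeueing this one.
      ctx.pause(5000, function() {
        // Resume once the rate-limit window should have passed.
        setTimeout(function() { ctx.resume(); }, 60 * 1000);
      });
      return done(new Error('rate limited')); // retried if attempts remain
    }
    done(err);
  });
});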


@richardbaronpenman
Author

Those further jobs could also be delayed...


@aventuralabs

Honestly – you keep asking for alternative solutions, and all you seem to want to do is argue with them!

Delaying everything basically recycles every related job in your queue through the same fate. If there are a lot of them, you'll drive up the processing load and bottleneck your worker. I can't think of many great solutions to this, but a much better one has to do with halting the process.


@richardbaronpenman
Author

Yeah, it seems there is no ideal solution - thanks for the feedback. If requeuing becomes a bottleneck I will come back to this.


@behrad behrad closed this as completed Jan 21, 2014
@schnie
Contributor

schnie commented Apr 21, 2015

Hey @richardpenman, I'm having a very similar issue. I know it's been a while, but can you let me know how you ended up handling this? Thanks!

@richardbaronpenman
Author

I needed to modify Kue so that a call to done("requeue") would add it back to the queue after a delay. Not ideal.

@bradvogel

See #553 for another attempt at this. cc/ @andrewtamura
