- Fixed an issue where `worker.refresh()` may fail when `birth_date` is not set. Thanks @vanife!
- Fixed an issue where `worker.refresh()` may fail when upgrading from previous versions of RQ.
- Worker statistics! `Worker` now keeps track of `last_heartbeat`, `successful_job_count`, `failed_job_count` and `total_working_time` (example below). Thanks @selwin!
- `Worker` now sends heartbeat during suspension check. Thanks @theodesp!
- Added `queue.delete()` method to delete `Queue` objects entirely from Redis. Thanks @theodesp!
- More robust exception string decoding. Thanks @stylight!
- Added `--logging-level` option to command line scripts. Thanks @jiajunhuang!
- Added millisecond precision to job timestamps. Thanks @samuelcolvin!
- Python 2.6 is no longer supported. Thanks @samuelcolvin!
- Fixed an issue where `job.save()` may fail with unpickleable return value.
- Replace `job.id` with `Job` instance in local `_job_stack`. Thanks @katichev!
- `job.save()` no longer implicitly calls `job.cleanup()`. Thanks @katichev!
- Properly catch `StopRequested` in `worker.heartbeat()`. Thanks @fate0!
- You can now pass in timeout in days. Thanks @yaniv-g!
- The core logic of sending job to `FailedQueue` has been moved to `rq.handlers.move_to_failed_queue`. Thanks @yaniv-g!
- RQ cli commands now accept `--path` parameter. Thanks @kirill and @sjtbham!
- Make `job.dependency` slightly more efficient. Thanks @liangsijian!
- `FailedQueue` now returns jobs with the correct class. Thanks @amjith!
- Refactored APIs to allow custom `Connection`, `Job`, `Worker` and `Queue` classes via CLI. Thanks @jezdez!
- `job.delete()` now properly cleans itself from job registries. Thanks @selwin!
- `Worker` should no longer overwrite `job.meta`. Thanks @WeatherGod!
- `job.save_meta()` can now be used to persist custom job data (example below). Thanks @katichev!
- Added Redis Sentinel support. Thanks @strawposter!
- Make `Worker.find_by_key()` more efficient. Thanks @selwin!
- You can now specify job `timeout` using strings such as `queue.enqueue(foo, timeout='1m')` (example below). Thanks @luojiebin!
- Better unicode handling. Thanks @myme5261314 and @jaywink!
- Sentry should default to HTTP transport. Thanks @Atala!
- Improve `HerokuWorker` termination logic. Thanks @samuelcolvin!
- Fixes a bug that prevents fetching jobs from `FailedQueue` (#765). Thanks @jsurloppe!
- Fixes race condition when enqueueing jobs with dependency (#742). Thanks @th3hamm0r!
- Skip a test that requires Linux signals on MacOS (#763). Thanks @jezdez!
- `enqueue_job` should use Redis pipeline when available (#761). Thanks mtdewulf!
- Better support for Heroku workers (#584, #715)
- Support for connecting using a custom connection class (#741)
- Fix: connection stack in default worker (#479, #641)
- Fix: `fetch_job` now checks that a job requested actually comes from the intended queue (#728, #733)
- Fix: Properly raise exception if a job dependency does not exist (#747)
- Fix: Job status not updated when horse dies unexpectedly (#710)
- Fix: `request_force_stop_sigrtmin` failing for Python 3 (#727)
- Fix `Job.cancel()` method on failed queue (#707)
- Python 3.5 compatibility improvements (#729)
- Improved signal name lookup (#722)
- Jobs that depend on a job with `result_ttl == 0` are now properly enqueued.
- `cancel_job` now works properly. Thanks @jlopex!
- Jobs that execute successfully now no longer try to remove themselves from the queue. Thanks @amyangfei!
- Worker now properly logs Falsy return values. Thanks @liorsbg!
- `Worker.work()` now accepts a `logging_level` argument. Thanks @jlopex!
- Logging related fixes by @redbaron4 and @butla!
- `@job` decorator now accepts a `ttl` argument. Thanks @javimb!
- `Worker.__init__` now accepts a `queue_class` keyword argument. Thanks @antoineleclair!
- `Worker` now saves warm shutdown time. You can access this property from `worker.shutdown_requested_date`. Thanks @olingerc!
- Synchronous queues now properly set completed job status as finished. Thanks @ecarreras!
- `Worker` now correctly deletes `current_job_id` after failed job execution. Thanks @olingerc!
- `Job.create()` and `queue.enqueue_call()` now accept a `meta` argument. Thanks @tornstrom!
- Added `job.started_at` property. Thanks @samuelcolvin!
- Cleaned up the implementation of `job.cancel()` and `job.delete()`. Thanks @glaslos!
- `Worker.execute_job()` now exports `RQ_WORKER_ID` and `RQ_JOB_ID` to OS environment variables (example below). Thanks @mgk!
- `rqinfo` now accepts a `--config` option. Thanks @kfrendrich!
- `Worker` class now has `request_force_stop()` and `request_stop()` methods that can be overridden by custom worker classes. Thanks @samuelcolvin!
- Other minor fixes by @VicarEscaped, @kampfschlaefer, @ccurvey, @zfz, @antoineleclair, @orangain, @nicksnell, @SkyLothar, @ahxxm and @horida.
- Job results are now logged on `DEBUG` level. Thanks @tbaugis!
- Modified `patch_connection` so the Redis connection can be easily mocked
- Custom exception handlers are now called if the Redis connection is lost. Thanks @jlopex!
- Jobs can now depend on jobs in a different queue. Thanks @jlopex!
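
A minimal sketch pulling together a few of the APIs mentioned in the list above (string timeouts, `job.meta` with `job.save_meta()`, the exported `RQ_WORKER_ID`/`RQ_JOB_ID` variables, and the worker statistics). It assumes a local Redis server, and that in real code the job function lives in a module importable by the worker; the `timeout` keyword is the one described in these entries (later RQ releases renamed it `job_timeout`):

```python
import os

from redis import Redis
from rq import Queue, Worker


def process_order(order_id):
    # Inside a running job, Worker.execute_job() exports these variables.
    print('worker=%s job=%s' % (os.environ.get('RQ_WORKER_ID'),
                                os.environ.get('RQ_JOB_ID')))
    return order_id


connection = Redis()
queue = Queue(connection=connection)

# String timeouts such as '1m' are accepted alongside plain seconds.
job = queue.enqueue(process_order, 42, timeout='1m')

# Custom job data goes into job.meta; save_meta() persists only that dict.
job.meta['customer_id'] = 7
job.save_meta()

# Worker statistics: last_heartbeat, successful_job_count,
# failed_job_count and total_working_time.
for worker in Worker.all(connection=connection):
    print(worker.name, worker.last_heartbeat,
          worker.successful_job_count, worker.failed_job_count,
          worker.total_working_time)
```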
(August 25th, 2015)
- Add support for `--exception-handler` command line flag (example below)
- Fix compatibility with click>=5.0
- Fix maximum recursion depth problem for very large queues that contain jobs that all fail
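
A hedged sketch of the kind of handler the `--exception-handler` flag could point to; the module path and handler name are illustrative, and the `(job, exc_type, exc_value, traceback)` signature follows RQ's custom exception handler interface:

```python
# Illustrative handler module, e.g. passed to the CLI as
# --exception-handler 'myapp.handlers.report_error' (path is hypothetical).
def report_error(job, exc_type, exc_value, traceback):
    # Do something useful with the failure here (logging, metrics, ...).
    print('Job %s failed: %s' % (job.id, exc_value))
    # Returning True hands the exception to the next handler in the chain
    # (by default, the one that moves the job to the failed queue).
    return True
```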
(July 8th, 2015)
- Fix compatibility with raven>=5.4.0
(June 3rd, 2015)
- Better API for instantiating Workers. Thanks @RyanMTB!
- Better support for unicode kwargs. Thanks @nealtodd and @brownstein!
- Workers now automatically clean up job registries every hour
- Jobs in `FailedQueue` now have their statuses set properly
- `enqueue_call()` no longer ignores `ttl`. Thanks @mbodock!
- Improved logging. Thanks @trevorprater!
(April 14th, 2015)
- Support SSL connection to Redis (requires redis-py>=2.10)
- Fix to prevent deep call stacks with large queues
(March 9th, 2015)
- Resolve performance issue when queues contain many jobs
- Restore the ability to specify connection params in config
- Record `birth_date` and `death_date` on Worker
- Add support for SSL URLs in Redis (and `REDIS_SSL` config option)
- Fix encoding issues with non-ASCII characters in function arguments
- Fix Redis transaction management issue with job dependencies
(Jan 30th, 2015)
- RQ workers can now be paused and resumed using `rq suspend` and `rq resume` commands. Thanks Jonathan Tushman!
- Jobs that are being performed are now stored in `StartedJobRegistry` for monitoring purposes. This also prevents currently active jobs from being orphaned/lost in the case of hard shutdowns.
- You can now monitor finished jobs by checking `FinishedJobRegistry`. Thanks Nic Cope for helping!
- Jobs with unmet dependencies are now created with `deferred` as their status. You can monitor deferred jobs by checking `DeferredJobRegistry`.
- It is now possible to enqueue a job at the beginning of the queue using `queue.enqueue(func, at_front=True)` (example below). Thanks Travis Johnson!
- Command line scripts have all been refactored to use `click`. Thanks Lyon Zhang!
- Added a new `SimpleWorker` that does not fork when executing jobs. Useful for testing purposes. Thanks Cal Leeming!
- Added `--queue-class` and `--job-class` arguments to `rqworker` script. Thanks David Bonner!
- Many other minor bug fixes and enhancements.
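
A minimal sketch of enqueueing at the front of a queue and peeking into `StartedJobRegistry`, assuming a local Redis server; `urgent_task` is illustrative:

```python
from redis import Redis
from rq import Queue
from rq.registry import StartedJobRegistry


def urgent_task():
    # In real code this function must be importable by the worker.
    return 'done'


redis_conn = Redis()
queue = Queue('default', connection=redis_conn)

# Place the job at the front of the queue instead of the back.
queue.enqueue(urgent_task, at_front=True)

# Jobs currently being worked on are tracked in StartedJobRegistry.
registry = StartedJobRegistry('default', connection=redis_conn)
print(registry.get_job_ids())
```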
(May 21st, 2014)
- Raise a warning when RQ workers are used with Sentry DSNs using asynchronous transports. Thanks Wei, Selwin & Toms!
(May 8th, 2014)
- Fix a bug where rqworker broke on Python 2.6. Thanks, Marko!
(May 7th, 2014)
- Properly declare redis dependency.
- Fix a NameError regression that was introduced in 0.4.3.
(May 6th, 2014)
- Make job and queue classes overridable. Thanks, Marko!
- Don't require connection for @job decorator at definition time. Thanks, Sasha!
- Syntactic code cleanup.
(April 28th, 2014)
- Add missing depends_on kwarg to @job decorator. Thanks, Sasha!
(April 22nd, 2014)
- Fix bug where RQ 0.4 workers could not unpickle/process jobs from RQ < 0.4.
(April 22nd, 2014)
- Emptying the failed queue from the command line is now as simple as running `rqinfo -X` or `rqinfo --empty-failed-queue`.
- Job data is unpickled lazily. Thanks, Malthe!
- Removed dependency on the `times` library. Thanks, Malthe!
- Job dependencies (example below)! Thanks, Selwin.
- Custom worker classes, via the `--worker-class=path.to.MyClass` command line argument. Thanks, Selwin.
- `Queue.all()` and `rqinfo` now report empty queues, too. Thanks, Rob!
- Fixed a performance issue in `Queue.all()` when issued in large Redis DBs. Thanks, Rob!
- Birth and death dates are now stored as proper datetimes, not timestamps.
- Ability to provide a custom job description (instead of using the default function invocation hint). Thanks, İbrahim.
- Fix: temporary key for the compact queue is now randomly generated, which should avoid name clashes for concurrent compact actions.
- Fix: `Queue.empty()` now correctly deletes job hashes from Redis.
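
A minimal sketch of the new job dependencies, assuming a local Redis server and the `depends_on` parameter of `enqueue_call()`; the task functions are illustrative:

```python
from redis import Redis
from rq import Queue


def fetch_data():
    # In real code these functions must be importable by the worker.
    return [1, 2, 3]


def summarize():
    return 'summary'


queue = Queue(connection=Redis())

first = queue.enqueue(fetch_data)

# summarize() will only run after fetch_data() has finished successfully.
queue.enqueue_call(func=summarize, depends_on=first)
```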
(December 17th, 2013)
- Bug fix where the worker crashes on jobs that have their timeout explicitly removed. Thanks for reporting, @algrs.
(December 16th, 2013)
- Bug fix where a worker could time out before the job was done, removing it from any monitor overviews (#288).
(August 23rd, 2013)
- Some more fixes in command line scripts for Python 3
(August 20th, 2013)
- Bug fix in setup.py
(August 20th, 2013)
- Python 3 compatibility (Thanks, Alex!)
- Minor bug fix where Sentry would break when func cannot be imported
(June 17th, 2013)
- `rqworker` and `rqinfo` have a `--url` argument to connect to a Redis url.
- `rqworker` and `rqinfo` have a `--socket` option to connect to a Redis server through a Unix socket.
- `rqworker` reads `SENTRY_DSN` from the environment, unless specifically provided on the command line.
- `Queue` has a new API that supports paging, `get_jobs(3, 7)`, which will return at most 7 jobs, starting from the 3rd (example below).
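
A minimal sketch of the paging API, assuming a local Redis server and the `(offset, length)` call shown in the entry above:

```python
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())

# Page through a large queue: at most 7 jobs, starting from the 3rd.
for job in queue.get_jobs(3, 7):
    print(job.id, job.description)
```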
(February 26th, 2013)
- Fixed bug where workers would not execute builtin functions properly.
(February 18th, 2013)
- Worker registrations now expire. This should prevent `rqinfo` from reporting about ghosted workers. (Thanks, @yaniv-aknin!)
- `rqworker` will automatically clean up ghosted worker registrations from pre-0.3.6 runs.
- `rqworker` grew a `-q` flag, to be more silent (only warnings/errors are shown)
(February 6th, 2013)
- `ended_at` is now recorded for normally finished jobs, too. (Previously only for failed jobs.)
- Adds support for both `Redis` and `StrictRedis` connection types
- Makes `StrictRedis` the default connection type if none is explicitly provided
(January 23rd, 2013)
- Restore compatibility with Python 2.6.
(January 18th, 2013)
- Fix bug where work was lost due to silently ignored unpickle errors.
- Jobs can now access the current `Job` instance from within. Relevant documentation here.
- Custom properties can be set by modifying the `job.meta` dict (example below). Relevant documentation here.
- `rqworker` now has an optional `--password` flag.
- Remove `logbook` dependency (in favor of `logging`)
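
A minimal sketch of a job updating its own `job.meta` from within, using `get_current_job` from the `rq` package; the function name is illustrative, and `job.save()` is used to persist the change since `save_meta()` did not exist yet at this point in the changelog:

```python
from rq import get_current_job


def import_records(records):
    # get_current_job() gives access to the Job instance from inside the job.
    job = get_current_job()

    for index, record in enumerate(records, start=1):
        # Custom properties live in job.meta; job.save() persists them.
        job.meta['progress'] = index / float(len(records))
        job.save()
```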
(September 3rd, 2012)
- Fixes broken `rqinfo` command.
- Improve compatibility with Python < 2.7.
(August 30th, 2012)
- `.enqueue()` now takes a `result_ttl` keyword argument that can be used to change the expiration time of results (example below).
- Queue constructor now takes an optional `async=False` argument to bypass the worker (for testing purposes).
- Jobs now carry status information. To get job status information, like whether a job is queued, finished, or failed, use the property `status`, or one of the new boolean accessor properties `is_queued`, `is_finished` or `is_failed`.
- Job return values are always stored explicitly, even if they have no explicit return value or return `None` (with given TTL of course). This makes it possible to distinguish between a job that explicitly returned `None` and a job that isn't finished yet (see the `status` property).
- Custom exception handlers can now be configured in addition to, or to fully replace, moving failed jobs to the failed queue. Relevant documentation here and here.
- `rqworker` now supports passing in configuration files instead of the many command line options: `rqworker -c settings` will source `settings.py`.
- `rqworker` now supports one-flag setup to enable Sentry as its exception handler: `rqworker --sentry-dsn="http://public:secret@example.com/1"`. Alternatively, you can use a settings file and configure `SENTRY_DSN = 'http://public:secret@example.com/1'` instead.
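
A minimal sketch of `result_ttl` and the new status accessors, assuming a local Redis server; `add` is illustrative:

```python
from redis import Redis
from rq import Queue


def add(x, y):
    # In real code this function must be importable by the worker.
    return x + y


queue = Queue(connection=Redis())

# Keep the job's return value in Redis for 5 minutes after it finishes.
job = queue.enqueue(add, 2, 3, result_ttl=300)

# job.result being None is ambiguous (not finished yet, or returned None);
# the status accessors introduced here disambiguate.
print(job.status)
print(job.is_queued, job.is_finished, job.is_failed)
```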
(August 5th, 2012)
- Reliability improvements
  - Warm shutdown now exits immediately when Ctrl+C is pressed and worker is idle
  - Worker does not leak worker registrations anymore when stopped gracefully
- `.enqueue()` does not consume the `timeout` kwarg anymore. Instead, to pass RQ a timeout value while enqueueing a function, use the explicit invocation instead:

  ```python
  q.enqueue(do_something, args=(1, 2), kwargs={'a': 1}, timeout=30)
  ```

- Add a `@job` decorator, which can be used to do Celery-style delayed invocations:

  ```python
  from redis import StrictRedis
  from rq.decorators import job

  # Connect to Redis
  redis = StrictRedis()

  @job('high', timeout=10, connection=redis)
  def some_work(x, y):
      return x + y
  ```

  Then, in another module, you can call `some_work`:

  ```python
  from foo.bar import some_work

  some_work.delay(2, 3)
  ```
(August 1st, 2012)
- Fix bug where return values that couldn't be pickled crashed the worker
(July 20th, 2012)
- Fix important bug where result data wasn't restored from Redis correctly (affected non-string results only).
(July 18th, 2012)
- `q.enqueue()` accepts instance methods now, too. Objects will be pickle'd along with the instance method, so beware.
- `q.enqueue()` accepts string specification of functions now, too. Example: `q.enqueue("my.math.lib.fibonacci", 5)`. Useful if the worker and the submitter of work don't share code bases.
- Job can be assigned custom attrs and they will be pickle'd along with the rest of the job's attrs. Can be used when writing RQ extensions.
- Workers can now accept explicit connections, like Queues.
- Various bug fixes.
(May 15, 2012)
- Fix broken PyPI deployment.
(May 14, 2012)
- Thread-safety by using context locals
- Register scripts as console_scripts, for better portability
- Various bugfixes.
(March 28, 2012)
- Initially released version.