Delayed::Job

Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.

It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks. Amongst those tasks are:

  • sending massive newsletters
  • image resizing
  • http downloads
  • updating smart collections
  • updating Solr, our search server, after product changes
  • batch imports
  • spam checks

Setup

The library revolves around a delayed_jobs table which looks as follows:


  create_table :delayed_jobs, :force => true do |table|
    table.integer  :priority, :default => 0      # Allows some jobs to jump to the front of the queue
    table.integer  :attempts, :default => 0      # Provides for retries, but still fail eventually.
    table.text     :handler                      # YAML-encoded string of the object that will do work
    table.string   :last_error                   # reason for last failure (See Note below)
    table.datetime :run_at                       # When to run. Could be Time.now for immediately, or sometime in the future.
    table.datetime :locked_at                    # Set when a client is working on this object
    table.datetime :failed_at                    # Set when all retries have failed (actually, by default, the record is deleted instead)
    table.string   :locked_by                    # Who is working on this object (if locked)
    table.timestamps
  end

Delayed Job now supports both ActiveRecord and DataMapper as its ORM. If DataMapper is defined when the gem is loaded, it will be used instead of ActiveRecord. To set up your table with DataMapper, require delayed_job in your application and run `Delayed::Job.auto_migrate!`.
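
For example (a minimal sketch; it assumes your DataMapper connection has already been configured):


  # One-off setup, e.g. from a console or a Rake task.
  # Assumes DataMapper.setup(...) has already been called.
  require 'delayed_job'

  Delayed::Job.auto_migrate!   # creates (or recreates) the delayed_jobs table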

On failure, the job is rescheduled to run again 5 + N ** 4 seconds later, where N is the number of attempts so far.

The default MAX_ATTEMPTS is 25. After this, the job is either deleted (the default) or left in the database with "failed_at" set.
With the default of 25 attempts, the last retry will be 20 days after the first failure, with the last interval being almost 100 hours.
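
For illustration, this retry schedule can be computed as follows (a sketch; seconds_until_retry is not part of the library):


  # Seconds to wait before the next retry after the Nth failed attempt: 5 + N ** 4
  def seconds_until_retry(attempts)
    5 + attempts ** 4
  end

  seconds_until_retry(1)    # => 6        (retry ~6 seconds after the first failure)
  seconds_until_retry(10)   # => 10_005   (~2.8 hours)
  seconds_until_retry(24)   # => 331_781  (~92 hours, the final interval)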

The default MAX_RUN_TIME is 4.hours. If your job takes longer than that, another worker may pick it up and run it concurrently. It's up to you to make sure your job doesn't exceed this time. You should set this to the longest time you think the job could take.

By default, it will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set Delayed::Job.destroy_failed_jobs = false. The failed jobs will be marked with a non-null failed_at.

Here is an example of changing job parameters in Rails:


  # config/initializers/delayed_job_config.rb
  Delayed::Job.destroy_failed_jobs = false
  silence_warnings do
    Delayed::Job.const_set("MAX_ATTEMPTS", 3)
    Delayed::Job.const_set("MAX_RUN_TIME", 5.minutes)
  end

Note: If your error messages are long, consider changing the last_error column to :text instead of :string (which has a 255-character limit).
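
A migration for that change could look like this (a sketch; the migration class name is hypothetical):


  class ChangeDelayedJobsLastErrorToText < ActiveRecord::Migration
    def self.up
      change_column :delayed_jobs, :last_error, :text
    end

    def self.down
      change_column :delayed_jobs, :last_error, :string
    end
  end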

Usage

Jobs are simple Ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table.

Job objects are serialized to YAML so that they can later be resurrected by the job runner.


  class NewsletterJob < Struct.new(:text, :emails)
    def perform
      emails.each { |e| NewsletterMailer.deliver_text_to_email(text, e) }
    end
  end

  Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', Customer.find(:all).collect(&:email))

There is also a second way to get jobs in the queue: send_later.


  BatchImporter.new(Shop.find(1)).send_later(:import_massive_csv, massive_csv)                                                      

or, for DataMapper:


  BatchImporter.new(Shop.get(1)).send_later(:import_massive_csv, massive_csv)                                                      

This will simply create a Delayed::PerformableMethod job in the jobs table, which serializes all the parameters you pass to it. There are some special smarts for Active Record objects, which are stored as their text representation and loaded fresh from the database when the job is actually run later.
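
Conceptually, send_later boils down to something like this (a simplified sketch, not the library's exact code):


  class Object
    def send_later(method, *args)
      # Wrap the receiver, method name and arguments in a job object
      # and push it onto the queue like any other job.
      Delayed::Job.enqueue Delayed::PerformableMethod.new(self, method.to_sym, args)
    end
  end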

Running the jobs

You can invoke rake jobs:work which will start working off jobs. You can cancel the rake task with CTRL-C.

You can also run a worker by writing a simple script/job_runner, and invoking it externally:


  #!/usr/bin/env ruby
  require File.dirname(__FILE__) + '/../config/environment'
  
  Delayed::Worker.new.start  

Workers can be running on any computer, as long as they have access to the database and their clocks are in sync. You can even run multiple workers per computer, but you must give each one a unique name. (TODO: put in an example)
Keep in mind that each worker will check the database at least every 5 seconds, as sketched below.
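
The worker's main loop is roughly the following (a simplified sketch of what Delayed::Worker#start does; the real implementation also handles worker naming, signal trapping and error reporting):


  loop do
    # Try to lock and run a batch of available jobs.
    success, failure = Delayed::Job.work_off

    # Nothing was done? Sleep before polling the database again.
    sleep(5) if success.zero? && failure.zero?
  end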

Note: The rake task will exit if the database has any network connectivity problems.

Cleaning up

You can invoke rake jobs:clear to delete all jobs in the queue.
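
For the ActiveRecord backend, that amounts to something like (a sketch):


  Delayed::Job.delete_all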

Running specs

Running rake spec will run all of the specs using ActiveRecord. To run the specs against DataMapper, use `rake spec DM=true`.

Changes

  • 1.8.0: Added a DataMapper adapter which is loaded instead of the ActiveRecord adapter if DataMapper is defined.
  • 1.7.0: Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.
  • 1.6.0: Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
  • 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.
  • 1.2.0: Added #send_later to Object for simpler job creation
  • 1.0.0: Initial release
