Releases: riverqueue/river
v0.14.3
Changed
- Dropped internal random generators in favor of `math/rand/v2`, which will have the effect of making code fully incompatible with Go 1.21 (`go.mod` has specified a minimum of 1.22 for some time already). PR #691.
Fixed
- 006 migration now tolerates previous existence of a `unique_states` column in case it was added separately so that the new index could be raised with `CONCURRENTLY`. PR #690.
v0.14.2
Fixed
- Cancellation of running jobs relied on a channel that was only received from in the job fetch routine, meaning that cancelled jobs would not actually be cancelled until the next scheduled fetch. This was fixed by also receiving from the job cancellation channel in the main producer loop, even if no fetches are happening (see the sketch below). PR #678.
- Job insert middleware were not being utilized for periodic jobs. This insertion path has been refactored to rely on the unified insertion path from the client. Fixes #675. PR #679.
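To illustrate the path this fix affects: remote cancellation is requested through the client. A minimal sketch, assuming an already-started `client` (`*river.Client[pgx.Tx]`) and a known `jobID`:

```go
// Ask River to cancel a job by ID. With this fix, a running job observes the
// cancellation promptly rather than waiting for the next scheduled fetch.
job, err := client.JobCancel(ctx, jobID)
if err != nil {
	// handle error
}
fmt.Printf("job %d is now in state %q\n", job.ID, job.State)
```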
v0.14.1
Fixed
- In PR #663 the client was changed to be more aggressive about re-fetching when it had previously fetched a full batch. Unfortunately a clause was missed, which resulted in the client being more aggressive any time even a single job was fetched on the previous attempt. This was corrected with a conditional to ensure it only happens when the last fetch was full. PR #668.
v0.14.0
Added
- Expose `JobCancelError` and `JobSnoozeError` types to more easily facilitate testing. PR #665.
Changed
- Tune the client to be more aggressive about fetching when it just fetched a full batch of jobs, or when it skipped its previous triggered fetch because it was already full. This should bring more consistent throughput to poll-only mode and in cases where there is a backlog of existing jobs but new ones aren't being actively inserted. This will result in increased fetch load on many installations, with the benefit of increased throughput. As before, `FetchCooldown` still limits how frequently these fetches can occur on each client and can be increased to reduce the amount of fetch querying (see the sketch below). Thanks Chris Gaffney (@gaffneyc) for the idea, initial implementation, and benchmarks. PR #663.
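A rough sketch of tuning this, assuming an existing `dbPool` and `workers` (the value shown is illustrative, not a recommendation):

```go
client, err := river.NewClient(riverpgxv5.New(dbPool), &river.Config{
	// Raise FetchCooldown to reduce fetch querying under the more
	// aggressive fetch behavior introduced in this release.
	FetchCooldown: 500 * time.Millisecond,
	Queues: map[string]river.QueueConfig{
		river.QueueDefault: {MaxWorkers: 100},
	},
	Workers: workers,
})
if err != nil {
	// handle error
}
```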
Fixed
- `riverpgxv5` driver: `Hijack()` the underlying listener connection as soon as it is acquired from the `pgxpool.Pool` in order to prevent the pool from automatically closing it after it reaches its max age. A max lifetime makes sense in the context of a pool with many conns, but a long-lived listener does not need a max lifetime as long as it can ensure the conn remains healthy. PR #661.
v0.13.0
Added
- A middleware system was added for job insertion and execution, providing the ability to extract shared functionality across workers. Both `JobInsertMiddleware` and `WorkerMiddleware` can be configured globally on the `Client`, and `WorkerMiddleware` can also be added on a per-worker basis using the new `Middleware` method on `Worker[T]`. Middleware can be useful for logging, telemetry, or for building higher level abstractions on top of base River functionality (see the sketch below). Despite the interface expansion, users should not encounter any breakage if they're embedding the `WorkerDefaults` type in their workers as recommended. PR #632.
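As an illustration only, a timing middleware might look roughly like the sketch below. Treat the exact interface shape (a `Work` method wrapping the inner execution via a `doInner` callback) as an assumption based on this description rather than a verified signature:

```go
// TimingMiddleware logs how long each job execution takes by wrapping the
// inner execution (interface shape assumed, not verified against the release).
type TimingMiddleware struct{}

func (TimingMiddleware) Work(ctx context.Context, job *rivertype.JobRow, doInner func(ctx context.Context) error) error {
	start := time.Now()
	err := doInner(ctx)
	log.Printf("job kind=%s id=%d took %s err=%v", job.Kind, job.ID, time.Since(start), err)
	return err
}
```

Per the notes above, a middleware like this could be registered globally on the `Client` or attached to a single worker via the new `Middleware` method on `Worker[T]`.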
Changed
- Breaking change: The advisory lock unique jobs implementation which was deprecated in v0.12.0 has been removed. Users of that feature should first upgrade to v0.12.1 to ensure they don't see any warning logs about using the deprecated advisory lock uniqueness. The new, faster unique implementation will be used automatically as long as the `UniqueOpts.ByState` list hasn't been customized to remove required states (`pending`, `scheduled`, `available`, and `running`). As of this release, customizing `ByState` without these required states returns an error. PR #614.
- Single job inserts are now unified under the hood to use the `InsertMany` bulk insert query. This should not be noticeable to users, and the unified code path will make it easier to build new features going forward. PR #614.
Fixed
- Allow `river.JobCancel` to accept a `nil` error as input without panicking (see the sketch below). PR #634.
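For example, a minimal sketch of a worker that cancels its own job; the args type and condition here are illustrative:

```go
type ReportArgs struct {
	AccountClosed bool `json:"account_closed"`
}

func (ReportArgs) Kind() string { return "report" }

type ReportWorker struct {
	river.WorkerDefaults[ReportArgs]
}

func (w *ReportWorker) Work(ctx context.Context, job *river.Job[ReportArgs]) error {
	if job.Args.AccountClosed {
		// Cancel this job; passing a nil error no longer panics as of this fix.
		return river.JobCancel(nil)
	}
	// ... do the actual work ...
	return nil
}
```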
v0.13.0-rc.1
Added
- A middleware system was added for job insertion and execution, providing the ability to extract shared functionality across workers. Both `JobInsertMiddleware` and `WorkerMiddleware` can be configured globally on the `Client`, and `WorkerMiddleware` can also be added on a per-worker basis using the new `Middleware` method on `Worker[T]`. Middleware can be useful for logging, telemetry, or for building higher level abstractions on top of base River functionality. Despite the interface expansion, users should not encounter any breakage if they're embedding the `WorkerDefaults` type in their workers as recommended. PR #632.
Changed
- Breaking change: The advisory lock unique jobs implementation which was deprecated in v0.12.0 has been removed. Users of that feature should first upgrade to v0.12.1 to ensure they don't see any warning logs about using the deprecated advisory lock uniqueness. The new, faster unique implementation will be used automatically as long as the `UniqueOpts.ByState` list hasn't been customized to remove required states (`pending`, `scheduled`, `available`, and `running`). As of this release, customizing `ByState` without these required states returns an error. PR #614.
- Single job inserts are now unified under the hood to use the `InsertMany` bulk insert query. This should not be noticeable to users, and the unified code path will make it easier to build new features going forward. PR #614.
Fixed
- Allow `river.JobCancel` to accept a `nil` error as input without panicking. PR #634.
v0.12.1
Changed
- The `BatchCompleter` that marks jobs as completed can now batch database updates for all states of jobs that have finished execution. Prior to this change, only `completed` jobs were batched into a single `UPDATE` call, while jobs moving to any other state used a single `UPDATE` per job. This change should significantly reduce database and pool contention on high volume systems when jobs get retried, snoozed, cancelled, or discarded following execution. PR #617.
Fixed
- Unique job changes from v0.12.0 / PR #590 introduced a bug with scheduled or retryable unique jobs where they could be considered in conflict with themselves and moved to `discarded` by mistake. There was also a possibility of a broken job scheduler if duplicate `retryable` unique jobs were attempted to be scheduled at the same time. The job scheduling query was corrected to address these issues along with missing test coverage. PR #619.
v0.12.0
This release contains a new database migration, version 6. If using River's internal migration system, it can be run with:

```sh
go install github.com/riverqueue/river/cmd/river@latest
river migrate-up --database-url "$DATABASE_URL"
```
If not using River's internal migration system, the raw SQL can alternatively be dumped with:
```sh
go install github.com/riverqueue/river/cmd/river@latest
river migrate-get --version 6 --up > river6.up.sql
river migrate-get --version 6 --down > river6.down.sql
```
The migration includes a new index. Users with a very large job table may want to consider raising the index separately using `CONCURRENTLY` (which must be run outside of a transaction), then run `river migrate-up` to finalize the process (it will tolerate an index that already exists):
```sql
ALTER TABLE river_job ADD COLUMN unique_states BIT(8);

CREATE UNIQUE INDEX CONCURRENTLY river_job_unique_idx ON river_job (unique_key)
    WHERE unique_key IS NOT NULL
      AND unique_states IS NOT NULL
      AND river_job_state_in_bitmask(unique_states, state);
```
```sh
go install github.com/riverqueue/river/cmd/river@latest
river migrate-up --database-url "$DATABASE_URL"
```
Added
- `rivertest.WorkContext`, a test function that can be used to initialize a context to test a `JobArgs.Work` implementation that will have a client set to context for use with `river.ClientFromContext`. PR #526.
- A new `river migrate-list` command is available which lists available migrations and which version a target database is migrated to. PR #534.
- `river version` or `river --version` now prints River version information. PR #537.
- `Config.JobCleanerTimeout` was added to allow configuration of the job cleaner query timeout. In some deployments with millions of stale jobs, the cleaner may not be able to complete its query within the default 30 seconds (see the sketch below).
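A minimal sketch of raising the cleaner timeout (assuming the field takes a `time.Duration`; the value shown is illustrative):

```go
config := &river.Config{
	// Give the job cleaner's query more headroom than the default 30 seconds
	// on deployments with millions of stale jobs.
	JobCleanerTimeout: 5 * time.Minute,
}
```

The rest of the client configuration is unchanged; this setting only affects the job cleaner's query timeout.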
Changed
There are two small breaking changes in this release: one in `InsertMany` and one in `rivermigrate`. As before, we try never to make breaking changes, but these ones were deemed worth it because of minimal impact and to help avoid panics.
- Breaking change: `Client.InsertMany` / `InsertManyTx` now return the inserted rows rather than merely returning a count of the inserted rows. The new implementations no longer use Postgres' `COPY FROM` protocol in order to facilitate return values. Users who relied on the return count can merely wrap the returned rows in `len()` to return to that behavior, or you can continue using the old APIs under their new names `InsertManyFast` and `InsertManyFastTx`. PR #589.
- Breaking change: `rivermigrate.New` now returns a possible error along with a migrator. An error may be returned, for example, when a migration line is configured that doesn't exist. PR #558.

  ```go
  // before
  migrator := rivermigrate.New(riverpgxv5.New(dbPool), nil)

  // after
  migrator, err := rivermigrate.New(riverpgxv5.New(dbPool), nil)
  if err != nil {
      // handle error
  }
  ```
- Unique jobs have been improved to allow bulk insertion of unique jobs via `InsertMany` / `InsertManyTx`, and to allow customizing the `ByState` list to add or remove certain states. This enables users to expand the set of unique states to also include `cancelled` and `discarded` jobs, or to remove `retryable` from uniqueness consideration. This updated implementation maintains the speed advantage of the newer index-backed uniqueness system, while allowing some flexibility in which job states are considered (see the sketch after this list).

  Unique jobs utilizing `ByArgs` can now also opt to have a subset of the job's arguments considered for uniqueness. For example, you could choose to consider only the `customer_id` field while ignoring the `trace_id` field:

  ```go
  type MyJobArgs struct {
      CustomerID string `json:"customer_id" river:"unique"`
      TraceID    string `json:"trace_id"`
  }
  ```

  Any fields considered in uniqueness are also sorted alphabetically in order to guarantee a consistent result, even if the encoded JSON isn't sorted consistently. For example, `encoding/json` encodes struct fields in their defined order, so merely reordering struct fields would previously have been enough to cause a new job to not be considered identical to a pre-existing one with different JSON order.

  The `UniqueOpts` type also gains an `ExcludeKind` option for cases where uniqueness needs to be guaranteed across multiple job types.

  In-flight unique jobs using the previous designs will continue to be executed successfully with these changes, so there should be no need for downtime as part of the migration. However, the v6 migration adds a new unique job index while also removing the old one, so users with in-flight unique jobs may also wish to avoid removing the old index until the new River release has been deployed in order to guarantee that jobs aren't duplicated by old River code once that index is removed.

  Deprecated: The original unique jobs implementation which relied on advisory locks has been deprecated, but not yet removed. The only way to trigger this old code path is with a single insert (`Insert` / `InsertTx`) and using `UniqueOpts.ByState` with a custom list of states that omits some of the now-required states for unique jobs. Specifically, `pending`, `scheduled`, `available`, and `running` cannot be removed from the `ByState` list with the new implementation. These are included in the default list, so only the places which customize this attribute need to be updated to opt into the new (much faster) unique jobs. The advisory lock unique implementation will be removed in an upcoming release, and until then emits warning level logs when it's used.
- Deprecated: The `MigrateTx` method of `rivermigrate` has been deprecated. It turns out there are certain combinations of schema changes which cannot be run within a single transaction, and the migrator now prefers to run each migration in its own transaction, one at a time. `MigrateTx` will be removed in a future version.
- The migrator now produces a better error in case of a non-existent migration line, including suggestions for known migration lines that are similar in name to the invalid one. PR #558.
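As a sketch of the uniqueness customization described in the unique jobs item above (the args type, fields, and specific states chosen here are illustrative; state constants are those defined in `rivertype`):

```go
type OrderArgs struct {
	// Only customer_id is considered for uniqueness thanks to the river:"unique" tag.
	CustomerID string `json:"customer_id" river:"unique"`
	TraceID    string `json:"trace_id"`
}

func (OrderArgs) Kind() string { return "order" }

func (OrderArgs) InsertOpts() river.InsertOpts {
	return river.InsertOpts{
		UniqueOpts: river.UniqueOpts{
			ByArgs: true,
			// pending, scheduled, available, and running must remain in the
			// list; cancelled is added here to expand the unique states.
			ByState: []rivertype.JobState{
				rivertype.JobStateAvailable,
				rivertype.JobStateCancelled,
				rivertype.JobStatePending,
				rivertype.JobStateRetryable,
				rivertype.JobStateRunning,
				rivertype.JobStateScheduled,
			},
		},
	}
}
```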
Fixed
- Fixed a panic that'd occur if `StopAndCancel` was invoked before a client was started. PR #557.
- A `PeriodicJobConstructor` should be able to return a `nil` `JobArgs` if it wishes to not have any job inserted. However, this was either never working or was broken at some point. It's now fixed (see the sketch after this list). Thanks @semanser! PR #572.
- Fixed a nil pointer exception if `Client.Subscribe` was called when the client had no configured workers (it still panics, but with a more instructive error message now). PR #599.
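To illustrate the periodic job fix, a sketch of a constructor that sometimes skips insertion (`maintenanceWindow` and `ReconcileArgs` are hypothetical names used only for illustration):

```go
periodicJob := river.NewPeriodicJob(
	river.PeriodicInterval(15*time.Minute),
	func() (river.JobArgs, *river.InsertOpts) {
		if maintenanceWindow() { // hypothetical helper
			// Returning nil JobArgs now correctly skips inserting a job this run.
			return nil, nil
		}
		return ReconcileArgs{}, nil // hypothetical job args type
	},
	&river.PeriodicJobOpts{RunOnStart: true},
)
```

The constructed periodic job would then typically be passed to the client via `Config.PeriodicJobs`.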
v0.12.0-rc.1
This release contains a new database migration, version 6. If using River's internal migration system, it can be run with:

```sh
go install github.com/riverqueue/river/cmd/river@latest
river migrate-up --database-url "$DATABASE_URL"
```
If not using River's internal migration system, the raw SQL can alternatively be dumped with:
```sh
go install github.com/riverqueue/river/cmd/river@latest
river migrate-get --version 6 --up > river6.up.sql
river migrate-get --version 6 --down > river6.down.sql
```
The migration includes a new index. Users with a very large job table may want to consider raising the index separately using `CONCURRENTLY` (which must be run outside of a transaction), then run `river migrate-up` to finalize the process (it will tolerate an index that already exists):
```sql
ALTER TABLE river_job ADD COLUMN unique_states BIT(8);

CREATE UNIQUE INDEX CONCURRENTLY river_job_unique_idx ON river_job (unique_key)
    WHERE unique_key IS NOT NULL
      AND unique_states IS NOT NULL
      AND river_job_state_in_bitmask(unique_states, state);
```
```sh
go install github.com/riverqueue/river/cmd/river@latest
river migrate-up --database-url "$DATABASE_URL"
```
Added
- `rivertest.WorkContext`, a test function that can be used to initialize a context to test a `JobArgs.Work` implementation that will have a client set to context for use with `river.ClientFromContext`. PR #526.
- A new `river migrate-list` command is available which lists available migrations and which version a target database is migrated to. PR #534.
- `river version` or `river --version` now prints River version information. PR #537.
- `Config.JobCleanerTimeout` was added to allow configuration of the job cleaner query timeout. In some deployments with millions of stale jobs, the cleaner may not be able to complete its query within the default 30 seconds.
Changed
There are two small breaking changes in this release: one in `InsertMany` and one in `rivermigrate`. As before, we try never to make breaking changes, but these ones were deemed worth it because of minimal impact and to help avoid panics.
- Breaking change: `Client.InsertMany` / `InsertManyTx` now return the inserted rows rather than merely returning a count of the inserted rows. The new implementations no longer use Postgres' `COPY FROM` protocol in order to facilitate return values. Users who relied on the return count can merely wrap the returned rows in `len()` to return to that behavior, or you can continue using the old APIs under their new names `InsertManyFast` and `InsertManyFastTx`. PR #589.
- Breaking change: `rivermigrate.New` now returns a possible error along with a migrator. An error may be returned, for example, when a migration line is configured that doesn't exist. PR #558.

  ```go
  // before
  migrator := rivermigrate.New(riverpgxv5.New(dbPool), nil)

  // after
  migrator, err := rivermigrate.New(riverpgxv5.New(dbPool), nil)
  if err != nil {
      // handle error
  }
  ```
- Unique jobs have been improved to allow bulk insertion of unique jobs via `InsertMany` / `InsertManyTx`, and to allow customizing the `ByState` list to add or remove certain states. This enables users to expand the set of unique states to also include `cancelled` and `discarded` jobs, or to remove `retryable` from uniqueness consideration. This updated implementation maintains the speed advantage of the newer index-backed uniqueness system, while allowing some flexibility in which job states are considered.

  Unique jobs utilizing `ByArgs` can now also opt to have a subset of the job's arguments considered for uniqueness. For example, you could choose to consider only the `customer_id` field while ignoring the `trace_id` field:

  ```go
  type MyJobArgs struct {
      CustomerID string `json:"customer_id" river:"unique"`
      TraceID    string `json:"trace_id"`
  }
  ```

  Any fields considered in uniqueness are also sorted alphabetically in order to guarantee a consistent result, even if the encoded JSON isn't sorted consistently. For example, `encoding/json` encodes struct fields in their defined order, so merely reordering struct fields would previously have been enough to cause a new job to not be considered identical to a pre-existing one with different JSON order.

  The `UniqueOpts` type also gains an `ExcludeKind` option for cases where uniqueness needs to be guaranteed across multiple job types.

  In-flight unique jobs using the previous designs will continue to be executed successfully with these changes, so there should be no need for downtime as part of the migration. However, the v6 migration adds a new unique job index while also removing the old one, so users with in-flight unique jobs may also wish to avoid removing the old index until the new River release has been deployed in order to guarantee that jobs aren't duplicated by old River code once that index is removed.

  Deprecated: The original unique jobs implementation which relied on advisory locks has been deprecated, but not yet removed. The only way to trigger this old code path is with a single insert (`Insert` / `InsertTx`) and using `UniqueOpts.ByState` with a custom list of states that omits some of the now-required states for unique jobs. Specifically, `pending`, `scheduled`, `available`, and `running` cannot be removed from the `ByState` list with the new implementation. These are included in the default list, so only the places which customize this attribute need to be updated to opt into the new (much faster) unique jobs. The advisory lock unique implementation will be removed in an upcoming release.
- Deprecated: The `MigrateTx` method of `rivermigrate` has been deprecated. It turns out there are certain combinations of schema changes which cannot be run within a single transaction, and the migrator now prefers to run each migration in its own transaction, one at a time. `MigrateTx` will be removed in a future version.
- The migrator now produces a better error in case of a non-existent migration line, including suggestions for known migration lines that are similar in name to the invalid one. PR #558.
Fixed
- Fixed a panic that'd occur if `StopAndCancel` was invoked before a client was started. PR #557.
- A `PeriodicJobConstructor` should be able to return a `nil` `JobArgs` if it wishes to not have any job inserted. However, this was either never working or was broken at some point. It's now fixed. Thanks @semanser! PR #572.
- Fixed a nil pointer exception if `Client.Subscribe` was called when the client had no configured workers (it still panics, but with a more instructive error message now). PR #599.