What is the recommended size limit for jobs? #714

Answered by brandur
st3fan asked this question in Q&A

Yeah, I'd say job args size probably doesn't matter that much on its own — large args get stored out of band in TOAST, which keeps the job row tuples themselves pretty lean and fast.

You should think about total table size though, because at some point tables/databases just become unwieldy when there's too much in them (e.g. recovery on failover and that sort of thing gets slower). So if you're storing 1,000 jobs at 32 kB each, that's around 32 MB — no problem. If you're storing 100,000,000 jobs at 32 kB, that's more like 3.2 TB. Even that might work, but I do start to worry about Postgres databases once they enter the TB scale.
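The arithmetic above can be sketched as a quick back-of-envelope estimate. This is a hypothetical helper, not anything from River itself, and it deliberately ignores row headers, indexes, and TOAST overhead, so treat it as a lower bound:

```python
# Rough lower-bound estimate of jobs table size: row count times args size.
# Ignores per-row tuple headers, indexes, and TOAST bookkeeping overhead.
def table_size_bytes(job_count: int, args_size_bytes: int) -> int:
    return job_count * args_size_bytes

# 1,000 jobs at 32 kB each ≈ 32 MB
small = table_size_bytes(1_000, 32_000)
print(f"{small / 1e6:.1f} MB")   # → 32.0 MB

# 100,000,000 jobs at 32 kB each ≈ 3.2 TB
large = table_size_bytes(100_000_000, 32_000)
print(f"{large / 1e12:.1f} TB")  # → 3.2 TB
```

The point of the exercise is that args size alone rarely matters; it's the product of args size and retained job count that decides whether the table stays comfortable.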

That link from Lukas on JSONB performance is good. It does se…

Answer selected by st3fan