
feat(batchprocessor): support partial consumption of batch entries #6203

Merged

Conversation

@bisakhmondal (Member) commented Jan 25, 2022

Signed-off-by: Bisakh Mondal <bisakhmondal00@gmail.com>

What this PR does / why we need it:

This is helpful when the batch processor's entries can't be processed in one go (transactionally) and an error occurs partway through consumption. With this PR, the batch processor takes a hint from the consumer so that already-consumed entries are not retried in the next run.

Use case: #6113 (comment), the datadog plugin, etc. (anywhere entries can't be processed as one bulk operation)
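For illustration, a minimal sketch of such a consumer callback, assuming the processor reads a third return value as the index of the first entry that failed (send_one is a hypothetical per-entry sender, not part of this PR):

local function send_entries(entries, batch_max_size)
    for i = 1, #entries do
        local ok, err = send_one(entries[i])  -- hypothetical per-entry sender
        if not ok then
            -- entries 1 .. i-1 were consumed successfully; return i so the
            -- processor retries only from the i-th entry onwards
            return false, err, i
        end
    end
    return true
end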

Pre-submission checklist:

  • Did you explain what problem this PR solves, or what new features have been added?
  • Have you added corresponding test cases?
  • Have you modified the corresponding document?
  • Is this PR backward compatible? If it is not backward compatible, please discuss on the mailing list first

@bisakhmondal bisakhmondal changed the title feat(batchprocessor): Support partial consumption of entries feat(batchprocessor): support partial consumption of batch entries Jan 25, 2022
@bisakhmondal bisakhmondal force-pushed the feat/batchprocessor/partialconsume branch from 52d90a7 to 82646a7 Compare January 26, 2022 06:44
end
-- request counter
ok, err = sock:send(format("%s:%s|%s%s", prefix ..
"request.counter", 1, "c", suffix))
Member:

Let's indent the continuation line to align with the (.

local function slice_batch(batch, n)
    local slice = {}
    for i = n or 1, #batch, 1 do
        slice[#slice + 1] = batch[i]
    end
    return slice
end
Member:

We can use an offset to avoid counting len?

Member Author:

Oh, I see. I thought the len operation was O(1), but it seems it gets recomputed in O(log n) on each invocation.
Nice nitpicking 👍
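For illustration, an offset-based variant that keeps its own index instead of recomputing #slice on every iteration (a sketch along the lines of the suggestion, not necessarily the merged code):

local function slice_batch(batch, n)
    local slice = {}
    local idx = 1
    for i = n or 1, #batch do
        slice[idx] = batch[i]
        idx = idx + 1
    end
    return slice
end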

function execute_func(premature, self, batch)
    if premature then
        return
    end

    local ok, err = self.func(batch.entries, self.batch_max_size)
    --- In case of "err" and a valid "n", the batch processor considers the first n-1
    --- entries to have been consumed successfully and retries only the remaining ones.
Member:
Would first_fail be better than n as the var name?

Member Author:
Yes, definitely.
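For illustration, a sketch of how the processor side might combine the renamed return value with slice_batch (the names and control flow here are assumptions, not the exact merged code):

local function consume_batch(self, batch)
    local ok, err, first_fail = self.func(batch.entries, self.batch_max_size)
    if not ok then
        if first_fail then
            -- entries before first_fail were already consumed; keep only
            -- the unconsumed tail for the next retry
            batch.entries = slice_batch(batch.entries, first_fail)
        end
        return false, err
    end
    return true
end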

apisix/utils/batch-processor.lua (outdated review thread, resolved)
@spacewander (Member):

Don't forget to mention it in the doc.

end
-- request counter
ok, err = sock:send(format("%s:%s|%s%s", prefix ..
"request.counter", 1, "c", suffix))
Member:

My bad! Actually, I meant alignment like:

apisix/apisix/upstream.lua, lines 274 to 275 in 615ee41:

core.log.info("discover new upstream from ", up_conf.service_name, ", type ",
              up_conf.discovery_type, ": ",
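For illustration, the datadog snippet above with that alignment applied (an assumed form of the fix, not necessarily the exact merged lines):

ok, err = sock:send(format("%s:%s|%s%s", prefix ..
                           "request.counter", 1, "c", suffix))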

Member Author:

Fixed

@bisakhmondal bisakhmondal force-pushed the feat/batchprocessor/partialconsume branch from 7aee653 to 2fbe081 Compare January 28, 2022 05:38
@spacewander spacewander merged commit c210add into apache:master Jan 28, 2022