feat(batchprocessor): support partial consumption of batch entries #6203
Conversation
Force-pushed from 52d90a7 to 82646a7
apisix/plugins/datadog.lua
Outdated
end
-- request counter
ok, err = sock:send(format("%s:%s|%s%s", prefix ..
    "request.counter", 1, "c", suffix))
Let's indent the line to align with the `(`.
apisix/utils/batch-processor.lua
Outdated
local function slice_batch(batch, n)
    local slice = {}
    for i = n or 1, #batch, 1 do
        slice[#slice + 1] = batch[i]
Can we use an offset to avoid recomputing the length?
Oh, I see. I thought `#` (len) was an O(1) operation, but it seems it gets recomputed in O(log n) on each invocation.
Nice nitpicking 👍
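A minimal sketch of the suggestion above (hypothetical, not the PR's final code): hoist the length computation out of the loop and append at a tracked offset, so neither `#batch` nor `#slice` is re-evaluated on every iteration.

```lua
-- Hypothetical variant of slice_batch: '#' is evaluated once per call
-- instead of once per loop iteration.
local function slice_batch(batch, n)
    local slice = {}
    local len = #batch          -- computed once, outside the loop
    local idx = 1
    for i = n or 1, len do
        slice[idx] = batch[i]   -- write at a tracked offset, no '#slice' per step
        idx = idx + 1
    end
    return slice
end
```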
apisix/utils/batch-processor.lua
Outdated
function execute_func(premature, self, batch)
    if premature then
        return
    end

    local ok, err = self.func(batch.entries, self.batch_max_size)
    --- In case of "err" and a valid "n" batch processor considers, all n-1 entries have been
Would `first_fail` be better than `n` as the var name?
Yes, definitely.
Don't forget to mention it in the doc.
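To illustrate the contract being discussed, here is a hypothetical consumer sketch: on failure it returns `ok, err, first_fail`, where `first_fail` is the index of the first entry that was not consumed. The names `send_entries` and `process_one` are illustrative, not part of the PR.

```lua
-- Hypothetical consumer: report the index of the first unconsumed entry
-- so the batch processor retries only the remaining tail.
local function send_entries(entries, process_one)
    for i, entry in ipairs(entries) do
        local ok, err = process_one(entry)
        if not ok then
            -- entries[1 .. i-1] were consumed successfully
            return false, err, i
        end
    end
    return true
end
```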
apisix/plugins/datadog.lua
Outdated
end
-- request counter
ok, err = sock:send(format("%s:%s|%s%s", prefix ..
    "request.counter", 1, "c", suffix))
My bad! Actually, I meant alignment like this (lines 274 to 275 in 615ee41):

core.log.info("discover new upstream from ", up_conf.service_name, ", type ",
              up_conf.discovery_type, ": ",
Fixed
Signed-off-by: Bisakh Mondal <bisakhmondal00@gmail.com>
Force-pushed from 7aee653 to 2fbe081
Signed-off-by: Bisakh Mondal bisakhmondal00@gmail.com
What this PR does / why we need it:
Helpful where the batch processor entries can't be processed in one go (transactionally) and an error occurs in the middle of consumption. With this PR, the batch processor takes a hint from the consumer so that already-consumed entries are not retried in the next run.
Use case: #6113 (comment), datadog plugin etc (where entries can't be processed as a bulk)
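The retry path can be sketched roughly as follows. This is a simplified illustration under stated assumptions: `run_once` and its signature are invented for this sketch and are not the PR's exact API; `slice_batch` mirrors the helper quoted in the review above.

```lua
-- Keep only the unconsumed tail of a batch, starting at index n.
local function slice_batch(batch, n)
    local slice = {}
    local len = #batch
    for i = n or 1, len do
        slice[#slice + 1] = batch[i]
    end
    return slice
end

-- Hypothetical driver: run the consumer once and return what (if anything)
-- should be retried on the next run.
local function run_once(entries, consume)
    local ok, err, first_fail = consume(entries)
    if not ok and first_fail then
        -- entries[1 .. first_fail-1] were consumed; retry only the rest
        return slice_batch(entries, first_fail)
    elseif not ok then
        return entries        -- no hint from the consumer: retry the whole batch
    end
    return nil                -- fully consumed, nothing to retry
end
```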
Pre-submission checklist: