[exporterqueue] Default Batcher that reads from the queue, batches and exports #11507
Closed · sfc-gh-sili wants to merge 1 commit into open-telemetry:main from sfc-gh-sili:sili-queue-batcher-unit
Conversation
sfc-gh-sili changed the title from "Batcher" to "[exporterqueue] Queue Batcher that reads from the queue, batches and exports" on Oct 22, 2024
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #11507      +/-   ##
==========================================
- Coverage   91.41%    91.27%    -0.14%
==========================================
  Files         433       435        +2
  Lines       23657     23918      +261
==========================================
+ Hits        21625     21831      +206
- Misses       1658      1701       +43
- Partials      374       386       +12
☔ View full report in Codecov by Sentry.
dmitryax pushed a commit that referenced this pull request on Oct 26, 2024
#### Description
This PR is a bare-minimum implementation of a component called queue batcher. On completion, this component will replace `consumers` in `queue_sender`, thus moving queue-batch to a pulling model instead of a pushing model.

Limitations of the current code:
* It implements only the case where batching is disabled, which means no merging or splitting of requests and no timeout flushing.
* It does not enforce an upper bound on concurrency.

All of these code paths currently panic and will be replaced with actual implementations in coming PRs. This PR is split from #11507 for easier review.

Design doc: https://docs.google.com/document/d/1y5jt7bQ6HWt04MntF8CjUwMBBeNiJs2gV4uUZfJjAsE/edit?usp=sharing

#### Link to tracking issue
#8122 #10368
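A minimal sketch of the pull-based flow described in this commit message, using a toy channel-backed queue. The names here (`chanQueue`, `defaultBatcher`, `readAndFlush`, `export`) are illustrative assumptions, not the actual collector API or the PR's code; only the batching-disabled path is shown, and the batching-enabled path panics, mirroring the limitations listed above.

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// chanQueue is a toy, channel-backed stand-in for the exporter queue; the real
// collector queue has a different API.
type chanQueue struct{ ch chan string }

// Read blocks until a request is available or the queue is closed.
func (q *chanQueue) Read(ctx context.Context) (string, bool) {
	select {
	case req, ok := <-q.ch:
		return req, ok
	case <-ctx.Done():
		return "", false
	}
}

// defaultBatcher pulls requests from the queue and exports them. Only the
// batching-disabled path is sketched here.
type defaultBatcher struct {
	queue           *chanQueue
	batchingEnabled bool
	export          func(ctx context.Context, req string) error
}

// Start launches the first reader goroutine.
func (b *defaultBatcher) Start() { go b.readAndFlush() }

// readAndFlush reads one request, hands reading off to a fresh goroutine, and
// flushes the request on the current goroutine. Nothing bounds the number of
// concurrent flushes, mirroring the stated limitation.
func (b *defaultBatcher) readAndFlush() {
	req, ok := b.queue.Read(context.Background())
	if !ok {
		return // queue was closed
	}
	if b.batchingEnabled {
		// Merging/splitting and timeout flushing are left as panics in the PR,
		// to be implemented in follow-up PRs.
		panic("batching is not implemented yet")
	}
	go b.readAndFlush()
	if err := b.export(context.Background(), req); err != nil {
		fmt.Println("export failed:", err)
	}
}

func main() {
	var wg sync.WaitGroup
	q := &chanQueue{ch: make(chan string, 10)}
	b := &defaultBatcher{
		queue: q,
		export: func(_ context.Context, req string) error {
			defer wg.Done()
			fmt.Println("exported", req)
			return nil
		},
	}
	b.Start()
	for i := 0; i < 3; i++ {
		wg.Add(1)
		q.ch <- fmt.Sprintf("request-%d", i)
	}
	wg.Wait()
	close(q.ch)
}
```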
sfc-gh-sili changed the title from "[exporterqueue] Queue Batcher that reads from the queue, batches and exports" to "[exporterqueue] Default Batcher that reads from the queue, batches and exports" on Oct 29, 2024
djaglowski pushed a commit to djaglowski/opentelemetry-collector that referenced this pull request on Nov 21, 2024
HongChenTW pushed a commit to HongChenTW/opentelemetry-collector that referenced this pull request on Dec 19, 2024
Description
This PR implements a new component that pulls from the queue in `queue_sender`. This component will replace `consumers` in `queue_sender`. The idea is that instead of allocating a group of reading goroutines and blocking them until the corresponding batch gets flushed, we now use a goroutine to read and then use the same goroutine to flush, while allocating a new goroutine to read. A sketch of the model being replaced follows below.
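For contrast, here is a rough sketch of the consumers-style model this component replaces, under the assumption of a fixed pool of reader goroutines that each stay occupied until the request they read has been flushed. Names and signatures (`startConsumers`, the channel-as-queue) are illustrative only, not the collector's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// startConsumers models the older pushing-style pool: a fixed number of reader
// goroutines, each of which stays blocked until the request it read is flushed.
func startConsumers(numConsumers int, queue <-chan string, flush func(string), wg *sync.WaitGroup) {
	for i := 0; i < numConsumers; i++ {
		go func() {
			for req := range queue {
				// The reading goroutine is tied up here until the flush
				// completes, so export latency limits how fast the queue drains.
				flush(req)
				wg.Done()
			}
		}()
	}
}

func main() {
	queue := make(chan string, 10)
	var wg sync.WaitGroup
	startConsumers(2, queue, func(req string) { fmt.Println("flushed", req) }, &wg)
	for i := 0; i < 5; i++ {
		wg.Add(1)
		queue <- fmt.Sprintf("request-%d", i)
	}
	wg.Wait()
	close(queue)
}
```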
Design doc:
https://docs.google.com/document/d/1y5jt7bQ6HWt04MntF8CjUwMBBeNiJs2gV4uUZfJjAsE/edit?usp=sharing
Link to tracking issue
#8122
#10368
Testing
Documentation