
Change OC receiver behavior according to host ingestion status #192

Conversation

pjanotti
Contributor

Make the OC receiver respond to host ingestion status. This is part of #76.

This closes #171 and closes #169.
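As a rough, hypothetical sketch of the intent (not the actual diff): before forwarding data, the receiver would consult the host and back off while the host reports it cannot ingest. The `Host` interface and its `OkToIngest` method below are assumptions made purely for illustration; the real interface discussed in #76 may look different.

```go
package ocreceiver

import (
	"context"
	"errors"
)

// Host is a hypothetical view of the collector host, used only for this sketch.
type Host interface {
	OkToIngest() bool
}

// MetricsData stands in for the collector's consumerdata.MetricsData.
type MetricsData struct{}

// MetricsConsumer stands in for the next consumer in the pipeline.
type MetricsConsumer interface {
	ConsumeMetricsData(ctx context.Context, md MetricsData) error
}

var errIngestionPaused = errors.New("host is not accepting data")

// consumeIfAllowed forwards data only while the host reports it can ingest,
// returning an error so the caller can signal back pressure instead of
// dropping data silently.
func consumeIfAllowed(ctx context.Context, host Host, next MetricsConsumer, md MetricsData) error {
	if !host.OkToIngest() {
		return errIngestionPaused
	}
	return next.ConsumeMetricsData(ctx, md)
}
```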
})
}

// TODO: Create proper observability some all metric receivers generate
Contributor Author

Opened #193 to track this and just noticed a typo here :)
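For reference, a minimal sketch of what receiver-side observability could look like using OpenCensus stats; the measure and view names here are illustrative assumptions, not what #193 eventually specified.

```go
package observability

import (
	"context"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
)

// mReceivedMetricPoints is a hypothetical measure name used for illustration.
var mReceivedMetricPoints = stats.Int64(
	"receiver/received_metric_points",
	"Number of metric points received by the receiver.",
	stats.UnitDimensionless)

// RegisterViews registers the view that aggregates the measure above.
func RegisterViews() error {
	return view.Register(&view.View{
		Name:        mReceivedMetricPoints.Name(),
		Description: mReceivedMetricPoints.Description(),
		Measure:     mReceivedMetricPoints,
		Aggregation: view.Sum(),
	})
}

// RecordReceivedMetricPoints would be called from the receiver hot path.
func RecordReceivedMetricPoints(ctx context.Context, count int64) {
	stats.Record(ctx, mReceivedMetricPoints.M(count))
}
```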

for _, md := range mds {
// TODO: add observability for errors here.
// TODO: consider queued-retry for metrics too.
ocr.nextConsumer.ConsumeMetricsData(ctx, *md)
Contributor Author


The usage of the bundler here doesn't seem correct; follow-up in #194.
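For context on why the loop above looks suspicious: with google.golang.org/api/support/bundler, items are added to the bundler one at a time and the handler is later invoked with a whole bundle (a slice of the item type). A minimal, generic sketch using a placeholder item type rather than the collector's metrics types:

```go
package main

import (
	"fmt"
	"time"

	"google.golang.org/api/support/bundler"
)

type item struct{ name string }

func main() {
	// The handler receives the whole bundle as a slice of the item type.
	b := bundler.NewBundler((*item)(nil), func(bundle interface{}) {
		for _, it := range bundle.([]*item) {
			fmt.Println("forwarding", it.name)
		}
	})
	b.DelayThreshold = 100 * time.Millisecond
	b.BundleCountThreshold = 10

	// Items are added one at a time with an estimated size in bytes.
	for i := 0; i < 25; i++ {
		if err := b.Add(&item{name: fmt.Sprintf("md-%d", i)}, 1); err != nil {
			fmt.Println("add failed:", err)
		}
	}
	// Flush delivers any buffered items before shutdown.
	b.Flush()
}
```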

@codecov-io

Codecov Report

Merging #192 into master will increase coverage by 0.18%.
The diff coverage is 73.94%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #192      +/-   ##
==========================================
+ Coverage   72.74%   72.93%   +0.18%     
==========================================
  Files         104      104              
  Lines        6094     6118      +24     
==========================================
+ Hits         4433     4462      +29     
+ Misses       1435     1429       -6     
- Partials      226      227       +1
Impacted Files Coverage Δ
receiver/opencensusreceiver/config.go 80.85% <100%> (+2.8%) ⬆️
receiver/opencensusreceiver/octrace/options.go 100% <100%> (+100%) ⬆️
receiver/zipkinreceiver/trace_receiver.go 72.08% <100%> (-0.58%) ⬇️
receiver/opencensusreceiver/ocmetrics/options.go 100% <100%> (+50%) ⬆️
observability/observability.go 63.15% <100%> (+13.15%) ⬆️
receiver/opencensusreceiver/octrace/opencensus.go 65.07% <57.89%> (-6.16%) ⬇️
...eceiver/opencensusreceiver/ocmetrics/opencensus.go 70.12% <62.5%> (-8.45%) ⬇️
receiver/opencensusreceiver/opencensus.go 86.06% <83.33%> (+1.51%) ⬆️
receiver/opencensusreceiver/options.go 100% <0%> (+40%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9233b6c...c79eaf9. Read the comment docs.

}
for _, opt := range opts {
opt(ocr)
}

// Setup and startup worker pool
workers := make([]*receiverWorker, 0, ocr.numWorkers)
Member


Are workers not needed?

Contributor Author


They are not really needed: see #169


type receiverWorker struct {
Member


Looks like you removed workers. Was it supposed to be in this PR?

Contributor Author


It was intentional since I was already touching this code and it reduces code. Notice that concurrency then becomes more "controlled" via the pipeline, by using and configuring processors like batcher and queued-retry.
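A toy sketch of that contrast, with simplified stand-in types rather than the actual collector interfaces: the receiver forwards data directly, and a queued processor downstream owns the worker pool that used to live in the receiver.

```go
package pipeline

import "context"

// TraceData stands in for the collector's consumerdata.TraceData.
type TraceData struct{}

type TraceConsumer interface {
	ConsumeTraceData(ctx context.Context, td TraceData) error
}

// directReceiver forwards data straight to the next consumer; buffering,
// batching, and retries live in downstream processors instead of in a
// receiver-local worker pool.
type directReceiver struct {
	next TraceConsumer
}

func (r *directReceiver) handle(ctx context.Context, td TraceData) error {
	return r.next.ConsumeTraceData(ctx, td)
}

// queuedProcessor is a toy stand-in for a queued-retry processor: it owns
// the concurrency (workers draining a channel) configured in the pipeline.
type queuedProcessor struct {
	next  TraceConsumer
	queue chan TraceData
}

func newQueuedProcessor(next TraceConsumer, numWorkers, queueSize int) *queuedProcessor {
	p := &queuedProcessor{next: next, queue: make(chan TraceData, queueSize)}
	for i := 0; i < numWorkers; i++ {
		go func() {
			for td := range p.queue {
				// A real implementation would retry on error; this sketch just forwards.
				_ = p.next.ConsumeTraceData(context.Background(), td)
			}
		}()
	}
	return p
}

func (p *queuedProcessor) ConsumeTraceData(ctx context.Context, td TraceData) error {
	p.queue <- td
	return nil
}
```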

errChan <- ocr.httpServer().Serve(httpL)
}()
errChan <- m.Serve()
ocr.host.ReportFatalError(ocr.serverGRPC.Serve(grpcL))
Member


Isn't it necessary to check whether Serve() returns nil?

Contributor Author


Good question.

In practice these are all blocking calls that only return when the server is closed (due to an error or an explicit stop/close). Prior to a shutdown we want to be notified if they terminate (even when returning nil), since that means the receiver in question is no longer running. That said, it would seem to be a server implementation error to return nil without a corresponding stop/close call (and in that case the log message won't be helpful at all).

In a shutdown triggered via ReportFatalError (i.e., a "Serve" call returning), only the first error is reported in the logs. For a shutdown via SIGTERM, no errors reported via ReportFatalError are logged. Note that whether "Serve" returns nil or an error when the corresponding stop/close is called is implementation dependent: e.g., the HTTP server always returns an error, while gRPC can return nil.
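A simplified sketch of the pattern being discussed (cmux/listener setup omitted, and the host reduced to the one method relevant here): each Serve call blocks in its own goroutine, and whatever it returns, nil or not, is handed to the host, which decides what to log.

```go
package receiver

import (
	"net"
	"net/http"

	"google.golang.org/grpc"
)

// Host is a simplified stand-in for the collector host interface.
type Host interface {
	ReportFatalError(err error)
}

// startServers launches the gRPC and HTTP servers on pre-split listeners
// (e.g. produced by cmux) and reports any termination to the host. Serve
// blocks until the server stops; http.Server always returns a non-nil error,
// while grpc.Server may return nil after Stop/GracefulStop.
func startServers(host Host, grpcSrv *grpc.Server, httpSrv *http.Server, grpcL, httpL net.Listener) {
	go func() {
		host.ReportFatalError(grpcSrv.Serve(grpcL))
	}()
	go func() {
		host.ReportFatalError(httpSrv.Serve(httpL))
	}()
}
```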

ocr.host.ReportFatalError(ocr.serverGRPC.Serve(grpcL))
}()
go func() {
ocr.host.ReportFatalError(ocr.httpServer().Serve(httpL))
Member


Same here.

Contributor Author


See above.

ocr.host.ReportFatalError(ocr.httpServer().Serve(httpL))
}()
go func() {
ocr.host.ReportFatalError(m.Serve())
Member


And here.

Contributor Author


Same.

@pjanotti
Contributor Author

We are not going to move ahead with this (further discussion and design related to #76). Closing this PR.

pjanotti closed this Jul 29, 2019
pjanotti deleted the oc-receiver-ingestion-response branch March 8, 2021 20:14
Successfully merging this pull request may close these issues:

Refactor OC receiver use host.ReportFatalError
Remove workers from OC trace receiver

3 participants