Add metrics Enricher API #1271
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master    #1271     +/-   ##
==========================================
  Coverage    78.0%    78.1%
==========================================
  Files         123      124      +1
  Lines        6252     6269     +17
==========================================
+ Hits         4879     4897     +18
  Misses       1125     1125
+ Partials      248      247      -1
sdk/metric/sdk.go
Outdated
// OnMetricRecorded is executed every time a metric is recorded by
// the sync instrument implementation of an Accumulator; it generally
// provides the ability to correlate the context with the metrics.
OnMetricRecorded(context.Context, *[]label.KeyValue)
would this method be too specific?
sdk/metric/sdk.go
Outdated
@@ -291,6 +308,9 @@ func (s *syncInstrument) Bind(kvs []label.KeyValue) api.BoundSyncImpl {
}

func (s *syncInstrument) RecordOne(ctx context.Context, number api.Number, kvs []label.KeyValue) {
	for _, processor := range s.getMetricsProcessors() {
		processor.OnMetricRecorded(ctx, &kvs)
It seems that the processors are only invoked for synchronous instruments, do they not have any value for async instruments?
@Aneurysm9 TBH we haven't started using the async instruments yet, but when implementing this feature I did look into it. Firstly, I didn't find a proper place to add this hook; then, as I explored the code, I reckoned it was unnecessary, since for async instruments the client can provide a callback, which already gives the ability to achieve the same purpose.
e.g. https://github.com/open-telemetry/opentelemetry-go/blob/master/example/basic/main.go#L71: you can re-compose the metric labels using the context and commonLabels in the callback function. (Please correct me if I'm wrong)
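A minimal sketch of that point, assuming the observer-callback API from the linked example; the import paths, the NewFloat64ValueObserver name, and the tenantKey context key are assumptions for this illustration, not part of this PR:

package example

import (
	"context"

	"go.opentelemetry.io/otel/api/metric" // assumed import paths for this era of opentelemetry-go
	"go.opentelemetry.io/otel/label"
)

// tenantKey is a hypothetical context key used only for this sketch.
type tenantKey struct{}

// registerObserver shows how an asynchronous callback can re-compose its own
// label set from the context it receives plus a set of common labels, which is
// why no separate hook is strictly needed for async instruments.
func registerObserver(meter metric.Meter, commonLabels []label.KeyValue) {
	callback := func(ctx context.Context, result metric.Float64ObserverResult) {
		labels := append([]label.KeyValue(nil), commonLabels...)
		if tenant, ok := ctx.Value(tenantKey{}).(string); ok {
			labels = append(labels, label.String("tenant", tenant))
		}
		result.Observe(1, labels...)
	}
	_ = metric.Must(meter).NewFloat64ValueObserver("ex.com.one", callback)
}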
It makes sense to use this interface for asynchronous instruments, but the applications will be different. There's no distributed context available for this Enricher API, but you still might want to use this API to filter labels before they enter the Accumulator. Enricher is not a good name for a filter, so it feels like we still haven't found the right name for this concept.
@jmacd Have you had a chance to look at open-telemetry/opentelemetry-specification#1116 (comment)?
@Aneurysm9 Thanks for reviewing. I've addressed your comment; could you please take another look?
@Aneurysm9 I was checking the conflicts and found that PR #1235 has removed the config and changed the structure of the code. Could you suggest how I should structure my code according to this change? cc @jmacd since you made the change and we're in another conversation at open-telemetry/opentelemetry-specification#1116. Do you think I should add the config option back for the "AccumulatorProcessor" or pass it as another parameter in addition to the
I'm hesitant to add this interface without a broader discussion at the specification level. In the past we have prototyped concepts here that were ultimately destined to become additions to the specification. Is that the intent of this PR @hstan?
As I mentioned in the description, this is part of the discussion on the specification linked below. It was intended to demo a Go implementation to support that discussion, but we also want it merged at some stage since we need this feature. Besides the discussion above, @parallelstream also had a chat with @bogdandrutu about this at a high level.
sdk/metric/sdk.go
Outdated
// Implementations of MetricsProcessor can be provided as a config option to provide an opportunity
// to re-compose metrics labels based on the context when the metrics are recorded.
MetricsProcessor interface {
MetricsProcessor does not seem like an appropriate name for this interface. It is overly general, uses the word "metrics" as a noun instead of an adjective (something that is avoided in OpenTelemetry), and duplicates the name of the other metric.Processor from the export package, which is also a field of the Accumulator.
Can we use a more descriptive name for the function this interface will serve, and possibly match broader naming standards?
sdk/metric/sdk.go
Outdated
// OnMetricRecorded is executed every time a metric is recorded by
// the sync instrument implementation of an Accumulator; it generally
// provides the ability to correlate the context with the metrics.
OnMetricRecorded(context.Context, *[]label.KeyValue)
I'm guessing the added indirection here of a pointer type for the second parameter means that argument will be mutated. This doesn't seem like idiomatic Go, and it will likely add an additional allocation for the slice pointer and pose concurrency issues.
Is there any reason this could not accept a slice and return a slice?
Additionally, how are errors expected to be handled here?
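For concreteness, a hedged sketch of the slice-in, slice-out shape being asked about here; the interface name mirrors the PR's draft, but this exact signature is only an illustration, not part of the diff:

package metric

import (
	"context"

	"go.opentelemetry.io/otel/label" // assumed import path for this era
)

// MetricsProcessor, reshaped so the hook returns a (possibly re-composed)
// label slice and an error instead of mutating the slice through a pointer.
type MetricsProcessor interface {
	OnMetricRecorded(ctx context.Context, kvs []label.KeyValue) ([]label.KeyValue, error)
}

The caller could then decide what to do with a returned error, for example fall back to the original labels and report the error through whatever error-handling mechanism the SDK exposes.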
sdk/metric/sdk.go
Outdated
// Implementations of MetricsProcessor can be provided as a config option to provide an opportunity
// to re-compose metrics labels based on the context when the metrics are recorded.
MetricsProcessor interface {
Does this need to be an interface? It seems like a declared func type would be sufficient here.
I think you're right
// MetricsLabelsEnricher can be provided as a config option to enrich metrics labels based on
// the context when the metrics are recorded
MetricsLabelsEnricher func(context.Context, []label.KeyValue) ([]label.KeyValue, error)
@MrAlias Does this make sense?
Sorry for the delay. I'll try to review it before tomorrow's SIG meetings.
This is great. My only reservation is still about terminology. In recent Metrics SIGs we've briefly discussed using the name "Processor" for the interface here, since it's closer to what the Span Processor API does. That leaves us hunting for a new name for the existing metrics Processor component in the SDK specification. Earlier names for that Processor: "Batcher", and "Integrator". We should discuss this in the SIG tomorrow.
package metric

// Config contains configuration for an SDK.
type Config struct {
FWIW I recently removed a similar struct from this package, arguing that Accumulators are not generally built from options, but from one of the controllers. That's why Resource is an explicit argument, and I would be just as happy if this were a third argument to the function, as opposed to a new option. This would be the first optional setting, and maybe we can introduce a Config struct and an Option interface when there is more than one.
FWIW, I'm planning to consolidate the push and pull controllers, which would simplify this PR. See open-telemetry/opentelemetry-specification#1207.
// MetricsLabelsEnricher is a function that enriches metrics labels based
// on kvs stored in context when metrics are recorded.
MetricsLabelsEnricher metric.MetricsLabelsEnricher
(Repeat: open-telemetry/opentelemetry-specification#1207 says we can and should consolidate push and pull controllers, to avoid this repetition.)
@@ -305,11 +321,17 @@ func (s *syncInstrument) RecordOne(ctx context.Context, number api.Number, kvs [
// processor will call Collect() when it receives a request to scrape
// current metric values. A push-based processor should configure its
// own periodic collection.
func NewAccumulator(processor export.Processor, resource *resource.Resource) *Accumulator {
func NewAccumulator(processor export.Processor, resource *resource.Resource, opts ...Option) *Accumulator {
See the comment above. I'd be happier to leave this a three-argument function until we have more than one optional setting.
@@ -291,6 +298,15 @@ func (s *syncInstrument) Bind(kvs []label.KeyValue) api.BoundSyncImpl {
}

func (s *syncInstrument) RecordOne(ctx context.Context, number api.Number, kvs []label.KeyValue) {
	if s.meter.metricsLabelsEnricher != nil {
		var err error
		kvs, err = s.meter.metricsLabelsEnricher(ctx, kvs)
This looks good for the unbound instrument code path. 👍
For the bound code path, in:
func (r *record) RecordOne(ctx context.Context, num number.Number)
If the enricher is present, take a new code path, where you'll call
s.meter.metricsLabelsEnricher(ctx, record.labels.ToSlice())
to get the enriched labels and then divert to the unbound instrument code path, since the *record that was bound is not applicable after enrichment of its labels.
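To make the suggested control flow concrete, here is a self-contained toy sketch of the divert-to-unbound idea. Every type and method in it (keyValue, labelSet, directRecord, the meter field on record, and so on) is a hypothetical stand-in for the SDK internals, not the code in this PR:

package main

import (
	"context"
	"fmt"
)

// Hypothetical stand-ins for the SDK internals discussed above.
type keyValue struct{ key, value string }

type labelSet struct{ kvs []keyValue }

// ToSlice mirrors the record.labels.ToSlice() call mentioned in the review.
func (l labelSet) ToSlice() []keyValue { return append([]keyValue(nil), l.kvs...) }

type enricher func(context.Context, []keyValue) ([]keyValue, error)

type meter struct{ metricsLabelsEnricher enricher }

// directRecord plays the role of the unbound instrument code path.
func (m *meter) directRecord(ctx context.Context, num float64, kvs []keyValue) {
	fmt.Println("recorded", num, "with labels", kvs)
}

type record struct {
	meter  *meter
	labels labelSet
}

// RecordOne sketches the reviewer's suggestion: when an enricher is present,
// enrich the bound labels and divert to the unbound path, because the bound
// record no longer matches the enriched label set.
func (r *record) RecordOne(ctx context.Context, num float64) {
	if r.meter.metricsLabelsEnricher != nil {
		if kvs, err := r.meter.metricsLabelsEnricher(ctx, r.labels.ToSlice()); err == nil {
			r.meter.directRecord(ctx, num, kvs)
			return
		}
		// On error, fall through to the original bound path with the bound labels.
	}
	fmt.Println("recorded", num, "with bound labels", r.labels.kvs)
}

func main() {
	m := &meter{metricsLabelsEnricher: func(_ context.Context, kvs []keyValue) ([]keyValue, error) {
		return append(kvs, keyValue{"span_id", "abc123"}), nil
	}}
	r := &record{meter: m, labels: labelSet{kvs: []keyValue{{"service", "demo"}}}}
	r.RecordOne(context.Background(), 1)
}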
@bogdandrutu and @open-telemetry/specs-metrics-approvers do you think this is an API-level or an SDK-level concept?
This sounds like SDK to me.
I will continue this PR after #1378 gets merged.
Closing this PR and starting a new branch to implement this based on #1421.
Opened new PR #1480.
This PR implements a way to correlate context (e.g. span context) with metrics by introducing a MetricsProcessor interface which can be implemented by the client and provided when creating a new push controller.
Use case:
Assuming we have a requirement to associate our spans with metrics by storing the span ID as a metric label, first we create an implementation of the MetricsProcessor and provide it during the creation of the push controller. Then, when recording a metric, we can provide a context which includes span information as well as other cross-cutting metadata.
As the processor gets applied for all recorded metrics, no additional explicit label setting is required when recording a value.
Related discussion: open-telemetry/opentelemetry-specification#1116
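To make the described use case concrete, a hedged sketch of what such a client-side implementation might look like; the OnMetricRecorded shape follows the draft in this PR, while the trace/label import paths, the SpanFromContext helper, and the WithMetricsProcessor push-controller option are assumptions rather than the final merged API:

package example

import (
	"context"

	"go.opentelemetry.io/otel/api/trace" // assumed import paths for this era of opentelemetry-go
	"go.opentelemetry.io/otel/label"
)

// spanIDProcessor records the active span ID as a metric label whenever a
// metric is recorded with a context that carries a span.
type spanIDProcessor struct{}

func (spanIDProcessor) OnMetricRecorded(ctx context.Context, kvs *[]label.KeyValue) {
	sc := trace.SpanFromContext(ctx).SpanContext()
	if sc.HasSpanID() {
		*kvs = append(*kvs, label.String("span_id", sc.SpanID.String()))
	}
}

// Hypothetical wiring when building the push controller (the option name is
// invented for this sketch):
//
//	pusher := push.New(processor, exporter, push.WithMetricsProcessor(spanIDProcessor{}))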