Inherit instrumentation library name #1364

Merged: 10 commits, Jan 21, 2022
5 changes: 4 additions & 1 deletion processor/spanmetricsprocessor/README.md
@@ -74,6 +74,8 @@ The following settings can be optionally configured:
- `attach_span_and_trace_id` attaches span id and trace id as attributes on metrics generated from spans if set to `true`. If not provided,
the default value of `false` is used.

- `inherit_instrumentation_library_name`: defines whether metrics generated from spans inherit the instrumentation library name of the originating span. If not provided, the default value of `false` is used, and the instrumentation library name on generated metrics is set to `spanmetricsprocessor`. See the example configuration below.

Collaborator:

Why do we want to inherit the instrumentation library name of the span? Is it strictly necessary? Is it solely for the one attribute from the span being reflected on the metric?

In my view, the instrumentation library being `spanmetricsprocessor` is valid, because that's where the metric is generated. If someone really needs to know the original instrumentation library, they can look at the linked span.

Author:

The issue with setting `spanmetricsprocessor` as the instrumentation library on everything is that it can aggregate incorrectly. For example, in our use case we have HTTP metrics sent from Service Proxy and HTTP metrics sent from the application. These share the same attributes, so the span metrics processor aggregates them all into one blob, incorrectly showing doubled-up data. The only way to differentiate between the two is by checking the instrumentation library name.

Collaborator:

I see, yeah, that makes sense. Wouldn't they be aggregated in some other way, though? I would think service.name would be different in the case you mentioned.

Author (@Tenaria, Jan 20, 2022):

No, because the service.name for both would be the application (not Service Proxy). Otherwise, the Service Proxy metrics for all services would collapse into one giant metric if they shared the same service name. Even if we set up another attribute to carry the actual service name, that would cause issues in the pipeline, because we could no longer rate limit/throttle per service when every service's metrics are attributed to Service Proxy. Using the instrumentation library name seemed like the built-in OTel way to differentiate, instead of spinning up our own solution. (We had a conversation about this as a team some time last year; I'm not sure if you were there.)

Collaborator:

Yeah, fair enough! Sounds good.
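
To make the aggregation concern in this thread concrete, here is a minimal, self-contained sketch (illustrative only, not the processor's code; `spanInfo` and `metricKey` are invented for this example) of how two span sources that share every dimension collapse into a single series unless the instrumentation library name is part of the aggregation key:

```go
// Sketch of the aggregation issue discussed above: two span sources that
// share every dimension are merged into one metric series unless the
// instrumentation library name is part of the key.
package main

import (
	"fmt"
	"sort"
	"strings"
)

// spanInfo is a stand-in for the data the processor reads off each span.
type spanInfo struct {
	instrumentationLibrary string            // e.g. the proxy vs. the app HTTP library
	attributes             map[string]string // dimensions such as service.name, operation
}

// metricKey builds the aggregation key. When inheritIL is false, both
// sources below produce the same key and their counts are merged.
func metricKey(s spanInfo, inheritIL bool) string {
	keys := make([]string, 0, len(s.attributes))
	for k := range s.attributes {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	parts := make([]string, 0, len(keys)+1)
	if inheritIL {
		parts = append(parts, "il="+s.instrumentationLibrary)
	}
	for _, k := range keys {
		parts = append(parts, k+"="+s.attributes[k])
	}
	return strings.Join(parts, "|")
}

func main() {
	spans := []spanInfo{
		{"service-proxy", map[string]string{"service.name": "checkout", "operation": "GET /cart"}},
		{"http-server", map[string]string{"service.name": "checkout", "operation": "GET /cart"}},
	}

	for _, inheritIL := range []bool{false, true} {
		counts := map[string]int{}
		for _, s := range spans {
			counts[metricKey(s, inheritIL)]++
		}
		fmt.Printf("inherit_instrumentation_library_name=%v -> %d distinct series\n",
			inheritIL, len(counts))
	}
}
```

Running it prints one distinct series with the flag off and two with it on, which is the double counting described above.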


## Examples

The following is a simple example usage of the spanmetrics processor.
@@ -116,7 +118,8 @@ processors:
default: us-east-1
- name: host_id
resource_attributes_cache_size: 1000
aggregation_temporality: "AGGREGATION_TEMPORALITY_DELTA"
inherit_instrumentation_library_name: true

exporters:
jaeger:
5 changes: 5 additions & 0 deletions processor/spanmetricsprocessor/config.go
@@ -73,6 +73,11 @@ type Config struct {
// AttachSpanAndTraceID attaches span id and trace id to metrics generated from spans.
// The default value is set to `false`.
AttachSpanAndTraceID bool `mapstructure:"attach_span_and_trace_id"`

// InheritInstrumentationLibraryName defines whether metrics generated from spans should inherit
// the instrumentation library name from the span.
// Optional. The default value is `false`, in which case the instrumentation library name on generated metrics is set to `spanmetricsprocessor`.
InheritInstrumentationLibraryName bool `mapstructure:"inherit_instrumentation_library_name"`
}

// GetAggregationTemporality converts the string value given in the config into a MetricAggregationTemporality.
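For orientation, here is a small sketch of how this flag could be consumed when the processor decides which instrumentation library name to put on the metrics it generates from a span; the helper name and the empty-name fallback are assumptions for illustration, not the processor's actual implementation.

```go
// Sketch only; assumes it sits alongside the processor code shown above.
package spanmetricsprocessor

// instrumentationLibraryName is the default name documented in the README.
const instrumentationLibraryName = "spanmetricsprocessor"

// metricsInstrumentationLibraryName picks the instrumentation library name to
// stamp on generated metrics. The inherit argument corresponds to
// Config.InheritInstrumentationLibraryName; falling back to the default when
// the span's library name is empty is an assumption, not documented behaviour.
func metricsInstrumentationLibraryName(inherit bool, spanLibraryName string) string {
	if inherit && spanLibraryName != "" {
		return spanLibraryName
	}
	return instrumentationLibraryName
}
```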
71 changes: 38 additions & 33 deletions processor/spanmetricsprocessor/config_test.go
@@ -38,31 +38,34 @@ func TestLoadConfig(t *testing.T) {
defaultMethod := "GET"
defaultRegion := "us-east-1"
testcases := []struct {
configFile string
wantMetricsExporter string
wantLatencyHistogramBuckets []time.Duration
wantDimensions []Dimension
wantDimensionsCacheSize int
wantResourceAttributes []Dimension
wantResourceAttributesCacheSize int
wantAggregationTemporality string
wantAttachSpanAndTraceID bool
configFile string
wantMetricsExporter string
wantLatencyHistogramBuckets []time.Duration
wantDimensions []Dimension
wantDimensionsCacheSize int
wantResourceAttributes []Dimension
wantResourceAttributesCacheSize int
wantAggregationTemporality string
wantAttachSpanAndTraceID bool
wantInheritInstrumentationLibraryName bool
}{
{
configFile: "config-2-pipelines.yaml",
wantMetricsExporter: "prometheus",
wantAggregationTemporality: cumulative,
wantDimensionsCacheSize: 500,
wantResourceAttributesCacheSize: 300,
wantAttachSpanAndTraceID: true,
configFile: "config-2-pipelines.yaml",
wantMetricsExporter: "prometheus",
wantAggregationTemporality: cumulative,
wantDimensionsCacheSize: 500,
wantResourceAttributesCacheSize: 300,
wantAttachSpanAndTraceID: true,
wantInheritInstrumentationLibraryName: true,
},
{
configFile: "config-3-pipelines.yaml",
wantMetricsExporter: "otlp/spanmetrics",
wantAggregationTemporality: cumulative,
wantDimensionsCacheSize: defaultDimensionsCacheSize,
wantResourceAttributesCacheSize: defaultResourceAttributesCacheSize,
wantAttachSpanAndTraceID: false,
configFile: "config-3-pipelines.yaml",
wantMetricsExporter: "otlp/spanmetrics",
wantAggregationTemporality: cumulative,
wantDimensionsCacheSize: defaultDimensionsCacheSize,
wantResourceAttributesCacheSize: defaultResourceAttributesCacheSize,
wantAttachSpanAndTraceID: false,
wantInheritInstrumentationLibraryName: false,
},
{
configFile: "config-full.yaml",
@@ -85,9 +88,10 @@
{"region", &defaultRegion},
{"host_id", nil},
},
wantResourceAttributesCacheSize: 3000,
wantAggregationTemporality: delta,
wantAttachSpanAndTraceID: false,
wantResourceAttributesCacheSize: 3000,
wantAggregationTemporality: delta,
wantAttachSpanAndTraceID: false,
wantInheritInstrumentationLibraryName: false,
},
}
for _, tc := range testcases {
@@ -116,15 +120,16 @@
require.NotNil(t, cfg)
assert.Equal(t,
&Config{
ProcessorSettings: config.NewProcessorSettings(config.NewID(typeStr)),
MetricsExporter: tc.wantMetricsExporter,
LatencyHistogramBuckets: tc.wantLatencyHistogramBuckets,
Dimensions: tc.wantDimensions,
DimensionsCacheSize: tc.wantDimensionsCacheSize,
ResourceAttributes: tc.wantResourceAttributes,
ResourceAttributesCacheSize: tc.wantResourceAttributesCacheSize,
AggregationTemporality: tc.wantAggregationTemporality,
AttachSpanAndTraceID: tc.wantAttachSpanAndTraceID,
ProcessorSettings: config.NewProcessorSettings(config.NewID(typeStr)),
MetricsExporter: tc.wantMetricsExporter,
LatencyHistogramBuckets: tc.wantLatencyHistogramBuckets,
Dimensions: tc.wantDimensions,
DimensionsCacheSize: tc.wantDimensionsCacheSize,
ResourceAttributes: tc.wantResourceAttributes,
ResourceAttributesCacheSize: tc.wantResourceAttributesCacheSize,
AggregationTemporality: tc.wantAggregationTemporality,
AttachSpanAndTraceID: tc.wantAttachSpanAndTraceID,
InheritInstrumentationLibraryName: tc.wantInheritInstrumentationLibraryName,
},
cfg.Processors[config.NewID(typeStr)],
)