[processor/metricstransform] aggregate_labels doesn't aggregate Delta counter with different timestamps #31791

Closed
yuri-rs opened this issue Mar 16, 2024 · 3 comments
Labels
bug Something isn't working needs triage New item requiring triage processor/metricstransform Metrics Transform processor

Comments

@yuri-rs
Contributor

yuri-rs commented Mar 16, 2024

Component(s)

processor/metricstransform

What happened?

Description

I ran into the same issue as #12611, but for a Delta counter.
I'd like to aggregate Delta counter datapoints that have different timestamps into a single datapoint, but this is not possible because of https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/metricstransformprocessor/operation_aggregate_labels.go#L53.

What is the reason for honoring the timestamp for Delta counters but ignoring it for Cumulative ones?
Maybe it would be better to have a configuration option to enable/disable timestamp grouping, so users could choose the behavior explicitly?
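To make the observed behavior concrete: this is not the processor's actual implementation, just a minimal standalone Go sketch of the effect, assuming the grouping key for Delta points includes the timestamp while the key for Cumulative points does not.

```go
package main

import "fmt"

// dataPoint is a simplified stand-in for a pmetric.NumberDataPoint.
type dataPoint struct {
	tsNanos int64 // datapoint timestamp in nanoseconds
	value   int64
}

// groupDelta mimics grouping that keys Delta points by timestamp:
// points with different timestamps land in different buckets, so
// aggregate_labels cannot merge them into one point.
func groupDelta(points []dataPoint) map[int64]int64 {
	buckets := map[int64]int64{}
	for _, p := range points {
		buckets[p.tsNanos] += p.value
	}
	return buckets
}

// groupCumulative ignores the timestamp, so all points with the same
// label set collapse into a single aggregated value.
func groupCumulative(points []dataPoint) int64 {
	var sum int64
	for _, p := range points {
		sum += p.value
	}
	return sum
}

func main() {
	points := []dataPoint{{1, 13}, {2, 13}, {3, 11}}
	fmt.Println(len(groupDelta(points)))  // 3: one bucket per timestamp
	fmt.Println(groupCumulative(points)) // 37: a single merged value
}
```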

Steps to Reproduce

Run the collector with the config provided below.

Expected Result

Datapoints for the "otlp.collector.metric.count" metric are grouped into a single datapoint.

Actual Result

All 5 datapoints remain separate (because of their different timestamps).

Collector version

v0.88.1-0.20231026220224-6405e152a2d9

Environment information

Environment

OS: macOS 14.4
Running on docker

OpenTelemetry Collector configuration

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 1s
          static_configs:
            - targets: ['0.0.0.0:8888']

exporters:
  logging:
    verbosity: detailed

processors:
  groupbyattrs:
  batch:
    timeout: 5s
  resource/personal_metrics:
    attributes:
      - action: upsert
        value: "local"
        key: service.name
  transform/count_metric:
    metric_statements:
      - context: resource
        statements:
          - keep_keys(attributes, ["service.name"])
  metricstransform/count_metric:
    transforms:
      - include: "otlp.collector.metric.count"
        match_type: strict
        action: update
        operations:
          - action: aggregate_labels
            aggregation_type: sum
            label_set: []

connectors:
  count:
    metrics:
      otlp.collector.metric.count:
    datapoints:
      otlp.collector.metric.data_point.count:

service:
  pipelines:
    metrics/internal:
      receivers: [prometheus]
      processors: [resource/personal_metrics]
      exporters: [count]
    metrics/count:
      receivers: [count]
      processors: [transform/count_metric, batch, groupbyattrs, metricstransform/count_metric]
      exporters: [logging]

  telemetry:
    logs:
      level: debug

Log output

2024-03-16T16:16:57.126Z	info	MetricsExporter	{"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 2, "data points": 10}
2024-03-16T16:16:57.129Z	info	ResourceMetrics #0
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(local)
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope otelcol/countconnector 
Metric #0
Descriptor:
     -> Name: otlp.collector.metric.count
     -> Description: 
     -> Unit: 
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Delta
NumberDataPoints #0
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-03-16 16:16:54.810260818 +0000 UTC
Value: 13
NumberDataPoints #1
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-03-16 16:16:55.80622129 +0000 UTC
Value: 13
NumberDataPoints #2
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-03-16 16:16:56.795377829 +0000 UTC
Value: 13
NumberDataPoints #3
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-03-16 16:16:52.853056665 +0000 UTC
Value: 11
NumberDataPoints #4
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-03-16 16:16:53.811284266 +0000 UTC
Value: 13
Metric #1
Descriptor:
     -> Name: otlp.collector.metric.data_point.count
...

Additional context

No response

@yuri-rs yuri-rs added bug Something isn't working needs triage New item requiring triage labels Mar 16, 2024
@github-actions github-actions bot added the processor/metricstransform Metrics Transform processor label Mar 16, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@yuri-rs
Contributor Author

yuri-rs commented Mar 16, 2024

I could work on a configuration-option PR if you find this idea reasonable.

@yuri-rs
Contributor Author

yuri-rs commented Mar 21, 2024

In my example, the timestamp is different for data points, which is why they were not aggregated into a single data point. This is acceptable.
I achieved my aggregation goal by adding the transform/TruncateTime processor.
The issue described here is incorrect; therefore, I will close the issue. Apologies for the confusion.
I still don't understand why StartTimestamp is handled differently for Cumulative and Delta counters, but that is a separate matter.
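For reference, a sketch of that workaround, assuming the transform processor's OTTL `TruncateTime` function; the processor name and the 10s interval are illustrative, not taken from the original config:

```yaml
processors:
  transform/truncate_time:
    metric_statements:
      - context: datapoint
        statements:
          # Round every datapoint timestamp down to the same 10s boundary,
          # so aggregate_labels sees identical timestamps and can merge them.
          - set(time, TruncateTime(time, Duration("10s")))
```

Placing this processor before metricstransform in the pipeline makes the Delta datapoints share a timestamp, which is what allows aggregate_labels to collapse them.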

@yuri-rs yuri-rs closed this as completed Mar 21, 2024