OpenTelemetry headers setter extension not working when any processor is added to the trace pipeline #29852

Open · SimranCode opened this issue Dec 13, 2023 · 12 comments
Labels: enhancement (New feature or request), extension/headerssetter, help wanted (Extra attention is needed), processor/tailsampling (Tail sampling processor)

@SimranCode

Component(s)

No response

What happened?

When I use the Headers Setter extension to set the tenant ID together with the tail sampling processor (or any processor, for that matter) in the traces pipeline of the OTel Collector config, the data does not get exported to the endpoint.

Both features work fine when added individually.

Could you provide a workaround if no fix is available for now?

Collector version

0.91

Environment information

No response

OpenTelemetry Collector configuration

extensions:
  headers_setter:
    headers:
      - action: upsert
        key: X-Scope-OrgID
        from_context: tenantid
receivers:
  otlp:
    protocols:
      grpc:
        include_metadata: true
        endpoint: 0.0.0.0:5555
      http:
        include_metadata: true
      

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    const_labels:
      tenant: tenant1
    
  otlp:
    endpoint: "tempo:4317"
    tls:
      insecure: true
    auth:
      authenticator: headers_setter

  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"
    auth:
      authenticator: headers_setter 


processors:
  tail_sampling:
    decision_wait: 1s
    num_traces: 1000
    expected_new_traces_per_sec: 100
    policies:
      [
        {
          name: latency-policy,
          type: latency,
          latency: {threshold_ms: 1000}
        }
      ]
  transform:
    metric_statements:
      - context: datapoint
        statements:
        - set(attributes["tenantname"], resource.attributes["tenantid"])
  batch:
    timeout: 1s
    send_batch_size: 1024
    
service:
  extensions: [headers_setter]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [transform]
      exporters: [prometheus]
    logs:
      receivers: [otlp]
      processors: []
      exporters: [loki]

Log output

No response

Additional context

No response

@SimranCode added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Dec 13, 2023
github-actions bot (Contributor)

Pinging code owners for extension/headerssetter: @jpkrohling. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@jpkrohling (Member)

Can you test replacing the tail-sampling processor with the transform processor in the traces pipeline and see if it works? The headers setter will not work with processors that reassemble incoming batches, such as the batch, group-by, or tail-sampling processors.
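For reference, a sketch of what that test could look like, adapted from the configuration in the report. The trace_statements block is a hypothetical addition, since the existing transform processor only defines metric_statements and would otherwise have nothing to do in a traces pipeline:

processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # hypothetical statement mirroring the existing metric one
          - set(attributes["tenantname"], resource.attributes["tenantid"])

service:
  extensions: [headers_setter]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [transform]   # tail_sampling removed for the test
      exporters: [otlp]

If the OTLP exporter starts sending again with this pipeline, the problem is isolated to processors that regroup incoming batches.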

@SimranCode (Author)

Thanks for the quick response!

Is there any workaround that lets us use both the sampling processor and the headers setter extension?

@SimranCode (Author)

@jpkrohling - Is there any other way to sample the data and implement multi-tenancy at the same time?

@jpkrohling (Member)

Right now, I don't think it's possible to keep the tenant information from your original context after the telemetry data goes through a tail-sampling processor. We have discussed that in the past (cc @bogdandrutu and @tigrannajaryan), especially around adding a TContext (telemetry context) in pdata, but that never got past the PoC stage.

@crobert-1 added the enhancement (New feature or request) and processor/tailsampling (Tail sampling processor) labels and removed the bug and needs triage labels on Feb 6, 2024
github-actions bot (Contributor) commented Feb 6, 2024

Pinging code owners for processor/tailsampling: @jpkrohling. See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot (Contributor) commented Apr 8, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions bot added the Stale label on Apr 8, 2024
@jpkrohling (Member)

We could use a similar approach to the batching processor, which allows keys to be added to the resulting context.
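For context, the batch processor option being referred to is metadata_keys, which makes the processor batch per distinct combination of the listed client-metadata values and carry those values forward in the outgoing context. A minimal sketch, assuming the tenantid key from the original report:

processors:
  batch:
    # batch separately per tenant and keep the key available in the
    # context so the headers_setter extension can still read it
    metadata_keys:
      - tenantid
    metadata_cardinality_limit: 100

The proposal here would be to give the tail-sampling processor a comparable option.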

@jpkrohling removed the Stale label on Apr 30, 2024
github-actions bot (Contributor) commented Jul 1, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions bot added the Stale label on Jul 1, 2024
@jpkrohling added the help wanted (Extra attention is needed) label and removed the Stale label on Jul 8, 2024
@stanislawcabalasamsung commented Jul 12, 2024

Hello,
I've encountered this issue. I had been using the batch processor successfully, with the required context being passed through. After adding the tail-sampling processor, the context is no longer passed, as noted in this issue.

I think implementing a feature similar to the one in the batch processor is a good idea. Is this something being considered? If so, is there any estimate of when it might be done?
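For anyone hitting the same wall, the working setup being described (before tail sampling is added) presumably looks roughly like the sketch below, combining include_metadata on the receiver, metadata_keys on the batch processor, and from_context in the headers setter. The tenantid key and the tempo endpoint are taken from the original report:

extensions:
  headers_setter:
    headers:
      - action: upsert
        key: X-Scope-OrgID
        from_context: tenantid

receivers:
  otlp:
    protocols:
      grpc:
        include_metadata: true   # exposes client metadata to the context

processors:
  batch:
    metadata_keys: [tenantid]    # preserves the key across batching

exporters:
  otlp:
    endpoint: "tempo:4317"
    tls:
      insecure: true
    auth:
      authenticator: headers_setter

service:
  extensions: [headers_setter]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

Inserting tail_sampling anywhere in that pipeline is what currently drops the context, which is the gap this issue tracks.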

@jpkrohling self-assigned this on Jul 15, 2024
github-actions bot (Contributor)

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions bot added the Stale label on Sep 16, 2024
@Joufu commented Oct 14, 2024

Hello, we are facing the same issue: the tail_sampling processor does not preserve metadata the way the batch processor does, for example.
Is there any consideration of implementing something similar in the tail_sampling processor, as in batch?

@github-actions bot removed the Stale label on Oct 15, 2024