
extensions/file_storage: headerssetterextension + file_storage not working for otlp grpc exporter #11780

Open
varkey98 opened this issue Oct 25, 2024 · 12 comments

varkey98 commented Oct 25, 2024

Component(s)

extension/headerssetter, extension/storage/filestorage, exporter/otlpgrpc

What happened?

Description

When the headers_setter extension is used in conjunction with the file_storage extension, the headers (taken from context metadata) are not set on the traces exported by the OTLP gRPC exporter.

Steps to Reproduce

Set up the file_storage extension and the headers_setter extension to add a header from client metadata (see the configuration and the client sketch below).

Expected Result

The downstream OTLP server receives the header value taken from the client metadata in the context.

Actual Result

The downstream OTLP server receives an empty value for the key.

Collector version

0.112.0

Environment information

Environment

OS: Mac OS X M1
Docker image version: otel/opentelemetry-collector-contrib:0.112.0

OpenTelemetry Collector configuration

extensions:
  file_storage:
    directory: /tmp/
    timeout: 1s
    create_directory: true
    compaction:
      on_start: true
      on_rebound: true
      directory: /tmp/
      max_transaction_size: 65_536
  headers_setter:
    headers:
      - action: insert
        key: agent-token
        from_context: agent-token

receivers:
  otlp:
    protocols:
      grpc:
        include_metadata: true
        endpoint: 0.0.0.0:4317
      http:
        include_metadata: true
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 200ms
    send_batch_size: 8192
    send_batch_max_size: 10000
    metadata_keys:
      - agent-token
    metadata_cardinality_limit: 10

exporters:
  otlp:
    endpoint: host.docker.internal:5317
    compression: none
    sending_queue:
      storage: file_storage
    auth:
      authenticator: headers_setter
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
  extensions: [headers_setter, file_storage]
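
A minimal client sketch for reproducing the issue against this configuration (the endpoint and the test-token value are placeholders for local testing, not part of the original report):

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"),
		otlptracegrpc.WithInsecure(),
		// Sent as gRPC metadata; the receiver's include_metadata: true makes it
		// available to the headers_setter extension via from_context.
		otlptracegrpc.WithHeaders(map[string]string{"agent-token": "test-token"}),
	)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()

	_, span := tp.Tracer("repro").Start(ctx, "test-span")
	span.End()
}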

Log output

varkeychanjacob@Varkeychans-Laptop *** % docker run --rm -it -v /Users/varkeychanjacob/Projects/****/otelcolconfig.yaml:/etc/otelcol-contrib/config.yaml -p 4317:4317 -v /Users/varkeychanjacob/Projects/***/:/tmp  otel/opentelemetry-collector-contrib:latest
2024-10-25T19:48:38.145Z        info    service@v0.112.0/service.go:135 Setting up own telemetry...
2024-10-25T19:48:38.145Z        info    telemetry/metrics.go:70 Serving metrics {"address": "localhost:8888", "metrics level": "Normal"}
2024-10-25T19:48:38.147Z        info    service@v0.112.0/service.go:207 Starting otelcol-contrib...     {"Version": "0.112.0", "NumCPU": 10}
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:39     Starting extensions...
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:42     Extension is starting...        {"kind": "extension", "name": "headers_setter"}
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:59     Extension started.      {"kind": "extension", "name": "headers_setter"}
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:42     Extension is starting...        {"kind": "extension", "name": "file_storage"}
2024-10-25T19:48:38.147Z        info    extensions/extensions.go:59     Extension started.      {"kind": "extension", "name": "file_storage"}
2024-10-25T19:48:38.149Z        info    filestorage@v0.112.0/client.go:235      finished compaction     {"kind": "extension", "name": "file_storage", "directory": "/tmp/exporter_otlp__traces", "elapsed": 0.000651583}
2024-10-25T19:48:38.149Z        warn    internal@v0.112.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks.       {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2024-10-25T19:48:38.149Z        info    otlpreceiver@v0.112.0/otlp.go:112       Starting GRPC server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
2024-10-25T19:48:38.149Z        warn    internal@v0.112.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks.       {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2024-10-25T19:48:38.149Z        info    otlpreceiver@v0.112.0/otlp.go:169       Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
2024-10-25T19:48:38.149Z        info    service@v0.112.0/service.go:230 Everything is ready. Begin running and processing data.
^C2024-10-25T19:50:27.492Z      info    otelcol@v0.112.0/collector.go:328       Received signal from OS {"signal": "interrupt"}
2024-10-25T19:50:27.493Z        info    service@v0.112.0/service.go:266 Starting shutdown...
2024-10-25T19:50:27.505Z        info    extensions/extensions.go:66     Stopping extensions...
2024-10-25T19:50:27.506Z        info    service@v0.112.0/service.go:280 Shutdown complete.

Additional context

No response

varkey98 added the bug label on Oct 25, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@VihasMakwana (Contributor) commented:

I'll try to reproduce this and keep you posted.

@varkey98 (Author) commented:

> I'll try to reproduce this and keep you posted.

Thank you

@jpkrohling (Member) commented Dec 2, 2024:

I don't think this is a problem with the file storage itself; it looks like something deeper, in the persistent queue from exporterqueue. I'm transferring this to the core repo.

jpkrohling transferred this issue from open-telemetry/opentelemetry-collector-contrib on Dec 2, 2024
@VihasMakwana (Contributor) commented:

Thanks @jpkrohling for transferring.
I can take this up. Can someone from @open-telemetry/collector-triagers assign it to me?

@varkey98 (Author) commented:

Hi, any updates on this issue?

@dmitryax (Member) commented:

Persistent storage currently doesn't propagate the context, so all the metadata is dropped.

@VihasMakwana did you have a chance to start working on it? If not, I can help.
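
For background, from_context headers are resolved by reading the request metadata stored on the context through the collector's client package, roughly like the sketch below (illustrative only, not the extension's actual code). Once the persistent queue re-creates the context without that metadata, the lookup returns nothing and the header stays empty:

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/collector/client"
)

// headerFromContext shows how a from_context lookup works: it reads the
// client.Info stored on the context and returns the values for the given
// metadata key.
func headerFromContext(ctx context.Context, key string) []string {
	info := client.FromContext(ctx)
	return info.Metadata.Get(key)
}

func main() {
	// With a plain background context (as after reading from the persistent
	// queue today), there is no metadata to read.
	fmt.Println(headerFromContext(context.Background(), "agent-token")) // []
}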

@VihasMakwana (Contributor) commented:

@dmitryax I started working on it but I got busy with some other tasks. My approach was to store context data as well when we save the data to the persistence queue. Do you have something similar in mind?

@dmitryax (Member) commented:

> My approach was to store context data as well when we save the data to the persistence queue. Do you have something similar in mind?

The whole context? How would you serialize it?

@VihasMakwana (Contributor) commented:

Well, not the whole context. I was planning to serialise only the metadata:

type Info struct {
    // Addr for the client connecting to this collector. Available in a
    // best-effort basis, and generally reliable for receivers making use of
    // confighttp.ToServer and configgrpc.ToServerOption.
    Addr net.Addr
    // Auth information from the incoming request as provided by
    // configauth.ServerAuthenticator implementations tied to the receiver for
    // this connection.
    Auth AuthData
    // Metadata is the request metadata from the client connecting to this connector.
    Metadata Metadata
}

Once we read data from the persistence queue, we would also read the metadata and create a new context using:

// NewContext takes an existing context and derives a new context with the
// client.Info value stored on it.
func NewContext(ctx context.Context, c Info) context.Context {
    return context.WithValue(ctx, ctxKey{}, c)
}
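
A rough sketch of that idea (hypothetical names: persistedEntry, marshalEntry, unmarshalEntry; not the actual exporterhelper change): write the metadata next to the payload and rebuild the context when the item is read back.

package main

import (
	"context"
	"encoding/json"
	"fmt"

	"go.opentelemetry.io/collector/client"
)

// persistedEntry is a hypothetical on-disk record pairing the serialized
// telemetry payload with the request metadata it arrived with.
type persistedEntry struct {
	Payload  []byte              `json:"payload"`
	Metadata map[string][]string `json:"metadata"`
}

// marshalEntry captures selected metadata keys from the incoming context
// before the queue item is written to file storage.
func marshalEntry(ctx context.Context, payload []byte, keys []string) ([]byte, error) {
	info := client.FromContext(ctx)
	md := make(map[string][]string, len(keys))
	for _, k := range keys {
		if vals := info.Metadata.Get(k); len(vals) > 0 {
			md[k] = vals
		}
	}
	return json.Marshal(persistedEntry{Payload: payload, Metadata: md})
}

// unmarshalEntry restores the payload and derives a fresh context carrying the
// persisted metadata, so the headers_setter lookup works again after a restart.
func unmarshalEntry(ctx context.Context, raw []byte) ([]byte, context.Context, error) {
	var e persistedEntry
	if err := json.Unmarshal(raw, &e); err != nil {
		return nil, ctx, err
	}
	restored := client.NewContext(ctx, client.Info{Metadata: client.NewMetadata(e.Metadata)})
	return e.Payload, restored, nil
}

func main() {
	// Round-trip example with a context that carries an agent-token value.
	in := client.NewContext(context.Background(),
		client.Info{Metadata: client.NewMetadata(map[string][]string{"agent-token": {"test-token"}})})
	raw, _ := marshalEntry(in, []byte("spans"), []string{"agent-token"})
	_, out, _ := unmarshalEntry(context.Background(), raw)
	fmt.Println(client.FromContext(out).Metadata.Get("agent-token")) // [test-token]
}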

@dmitryax (Member) commented:

I was thinking of making it configurable in case users are interested in keeping only particular metadata keys, but we can probably start with writing all metadata to disk. It shouldn't have a huge impact.
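
For illustration, such an option could look like the snippet below; the metadata_keys field under sending_queue is hypothetical and does not exist today:

exporters:
  otlp:
    sending_queue:
      storage: file_storage
      # hypothetical option: persist only these metadata keys with queued items
      metadata_keys:
        - agent-token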

@VihasMakwana (Contributor) commented Feb 19, 2025:

@dmitryax Yup. I'll start working on this soon and roll out a draft. Thanks for your comments.
