We have an application that sends OTLP data to the collector via gRPC; the collector then forwards that data to our Splunk backend. At some point the collector appears to stop sending any logs/traces/metrics, and both the collector and our application eventually crash.
Steps to Reproduce
Send OTLP via gRPC to the collector
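As a possible isolation step (not part of the original report), a stripped-down collector config with only the OTLP gRPC receiver and the logging exporter can help show whether the error still occurs without the Splunk HEC exporters in the path. This is a sketch only; the 0.0.0.0:4317 endpoint (the receiver's default) and the logging exporter are assumptions:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  # logging exporter prints received data to the collector's own log
  logging:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]
    logs:
      receivers: [otlp]
      exporters: [logging]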
Expected Result
The collector produces no warnings and doesn't crash
Actual Result
The collector prints the following warning over and over
warn zapgrpc/zapgrpc.go:195 [transport] transport: http2Server.HandleStreams failed to read frame: connection error: COMPRESSION_ERROR {"grpc_log": true}
and then eventually crashes
Collector version
v0.68.0
Environment information
Environment
OS: Windows Server 2019
Compiler (if manually compiled): go version go1.19.5 windows/amd64
OpenTelemetry Collector configuration
extensions:
  # Enables health check endpoint for otel collector - https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/healthcheckextension
  health_check:
  # Opens up zpages for dev/debugging - https://github.com/open-telemetry/opentelemetry-collector/tree/main/extension/zpagesextension
  zpages:
    endpoint: localhost:55679

receivers:
  # For dotnet apps
  otlp:
    protocols:
      grpc:
      http:
  # FluentD
  fluentforward:
    endpoint: 0.0.0.0:8006
  # Otel Internal Metrics
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otelcol' # Gets mapped to service.name
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']
  # System Metrics
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      memory:
      network:
      # System load average metrics https://en.wikipedia.org/wiki/Load_(computing)
      load:
      # Paging/Swap space utilization and I/O metrics
      paging:
      # Aggregated system process count metrics
      processes:
      # System processes metrics, disabled by default
      # process:

processors:
  batch: # Batches data when sending
  resourcedetection:
    detectors: [gce, ecs, ec2, azure, system]
    timeout: 2s
    override: false
  transform/body-empty:
    log_statements:
      - context: log
        statements:
          - set(body, "body-empty") where body == nil
  groupbyattrs:
    keys:
      - service.name
      - service.version
      - host.name
  # Enabling the memory_limiter is strongly recommended for every pipeline.
  # Configuration is based on the amount of memory allocated to the collector.
  # For more information about memory limiter, see
  # https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiter/README.md
  memory_limiter:
    check_interval: 2s
    limit_mib: 256

exporters:
  splunk_hec/logs:
    token: hidden
    endpoint: hidden
    index: hidden
    max_connections: 20
    disable_compression: false
    timeout: 10s
    tls:
      insecure_skip_verify: true
      ca_file: ""
      cert_file: ""
      key_file: ""
  splunk_hec/traces:
    token: hidden
    endpoint: hidden
    index: hidden
    max_connections: 20
    disable_compression: false
    timeout: 10s
    tls:
      insecure_skip_verify: true
      ca_file: ""
      cert_file: ""
      key_file: ""
  splunk_hec/metrics:
    token: hidden
    endpoint: hidden
    index: hidden
    max_connections: 20
    disable_compression: false
    timeout: 10s
    tls:
      insecure_skip_verify: true
      ca_file: ""
      cert_file: ""
      key_file: ""

service:
  # zpages port : 55679
  pipelines:
    logs:
      receivers: [otlp, fluentforward]
      processors: [resourcedetection, transform/body-empty, groupbyattrs, memory_limiter, batch]
      exporters: [splunk_hec/logs]
    metrics:
      receivers: [otlp, hostmetrics]
      processors: [resourcedetection, groupbyattrs, memory_limiter, batch]
      exporters: [splunk_hec/metrics]
    traces:
      receivers: [otlp]
      processors: [resourcedetection, groupbyattrs, memory_limiter, batch]
      exporters: [splunk_hec/traces]
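One note on the configuration above, separate from the reported error: the memory_limiter README linked in the config comments recommends placing memory_limiter as the first processor in each pipeline so it can push back on receivers before other processors buffer data. A sketch of the logs pipeline reordered that way (the metrics and traces pipelines would follow the same pattern); this is a suggestion, not part of the original report:

service:
  pipelines:
    logs:
      receivers: [otlp, fluentforward]
      # memory_limiter first so back-pressure is applied before data is buffered downstream
      processors: [memory_limiter, resourcedetection, transform/body-empty, groupbyattrs, batch]
      exporters: [splunk_hec/logs]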
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.