
Unable to find complete exported spans in OpenSearch Backend #36136

Open
charan906 opened this issue Oct 30, 2024 · 5 comments
Labels: bug (Something isn't working), exporter/opensearch, needs triage (New item requiring triage)

Comments

@charan906

I generated 20,000 spans. The spans are received by the OpenTelemetry Collector, but during export the collector pod's logs show that it gets stuck at a particular span number every time, so I cannot see the complete span data in the backend.

This is the configuration I am using, with otel_version="v0.107.0". It is a customized binary that contains plugins from both the core and contrib repositories.

Configuration:

exporters:
  debug:
    verbosity: detailed
  opensearch:
    http:
      endpoint: ${env:SE_SERVER_URLS}
      tls:
        ca_file: ${env:ROOT_CA_CERT}
        cert_file: ${env:CLIENT_CRT}
        key_file: ${env:CLIENT_KEY}
extensions:
  memory_ballast: {}
  health_check:
    endpoint: ${env:MY_POD_IP}:13133
  jaegerremotesampling:
    source:
      reload_interval: 0s
      file: /etc/sampling/samplingstrategies.json
processors:
  batch: {}
  memory_limiter:
    # check_interval is the time between measurements of memory usage.
    check_interval: 5s
    # By default limit_mib is set to 80% of ".Values.resources.limits.memory".
    limit_percentage: 80
    # By default spike_limit_mib is set to 25% of ".Values.resources.limits.memory".
    spike_limit_percentage: 25
  probabilistic_sampler:
    sampling_percentage: 100
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:MY_POD_IP}:4317
        tls:
          cert_file: ${env:SERVER_CRT}
          key_file: ${env:SERVER_KEY}
      http:
        endpoint: ${env:MY_POD_IP}:4318
        tls:
          cert_file: ${env:SERVER_CRT}
          key_file: ${env:SERVER_KEY}
service:
  extensions:
    - memory_ballast
    - health_check
    - jaegerremotesampling
  pipelines:
    traces:
      exporters:
        - debug
        - opensearch
      processors:
        - memory_limiter
        - batch
        - probabilistic_sampler
      receivers:
        - otlp
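
To surface any export errors from the opensearch exporter, one option (a minimal sketch, not part of the original configuration) is to raise the collector's own internal log level via the standard service telemetry settings; this fragment merges with the existing service block:

service:
  telemetry:
    logs:
      # Internal collector logging; debug level makes exporter errors visible.
      level: debug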

Kubernetes resource specifications:

resources:
  telemetry-collector:
    requests:
      memory: 64Mi
      cpu: 250m
    limits:
      memory: 128Mi
      cpu: 500m
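
With a 128Mi limit, the startup logs below show the memory_limiter resolving to limit_mib=102 and spike_limit_mib=32, which leaves little headroom for 20,000 spans in flight. As a test (the values here are illustrative assumptions, not a recommendation), the limits could be raised to rule out memory pressure:

resources:
  telemetry-collector:
    requests:
      memory: 256Mi   # illustrative value
      cpu: 250m
    limits:
      memory: 512Mi   # illustrative value
      cpu: 500m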

Collector startup logs:

2024-10-30T12:15:48.331Z info memorylimiter/memorylimiter.go:151 Using percentage memory limiter {"kind": "processor", "name": "memory_limiter", "pipeline": "traces", "total_memory_mib": 128, "limit_percentage": 80, "spike_limit_percentage": 25}
2024-10-30T12:15:48.331Z info memorylimiter/memorylimiter.go:75 Memory limiter configured {"kind": "processor", "name": "memory_limiter", "pipeline": "traces", "limit_mib": 102, "spike_limit_mib": 32, "check_interval": 5}
2024-10-30T12:15:48.333Z info service@v0.107.0/service.go:195 Starting otelcol-custom... {"Version": "1.0.0", "NumCPU": 8}

charan906 added the bug label on Oct 30, 2024
@codeboten (Contributor)

This appears to be an issue with the opensearch exporter; transferring to the contrib repo, where that exporter lives.

codeboten transferred this issue from open-telemetry/opentelemetry-collector on Nov 1, 2024
codeboten added the needs triage and exporter/opensearch labels on Nov 1, 2024

github-actions bot commented Nov 1, 2024

Pinging code owners for exporter/opensearch: @Aneurysm9 @MitchellGale @MaxKsyunz @YANG-DB. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@YANG-DB (Contributor) commented Nov 19, 2024

@codeboten we are reviewing this issue and will respond shortly

@charan906 (Author)

Hello @codeboten, any findings on this issue?

@ps48 commented Dec 3, 2024

Hi @charan906,

We’re currently unable to replicate this issue on our side. We tested using the latest build, but it doesn’t seem like there have been recent changes to the OpenSearch exporter. Could you please help us by providing more details? Specifically:

  • You mentioned that the pod gets stuck at a particular span number—could you let us know what that number is?
  • Are there any errors in the OpenSearch logs or in the Collector pod’s error logs that might help us debug the issue?

I’m not entirely familiar with the chunking mechanism used by the OpenSearch exporter for bulk requests, but it seems like a potential factor. Additionally, have you observed the same issue when the following processor configuration is removed?

memory_limiter:
  # Interval between memory usage checks
  check_interval: 5s
  # Default limit is 80% of ".Values.resources.limits.memory"
  limit_percentage: 80
  # Default spike limit is 25% of ".Values.resources.limits.memory"
  spike_limit_percentage: 25

probabilistic_sampler:
  sampling_percentage: 100
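
For that test, a minimal traces pipeline (a sketch of the suggestion above, not a confirmed fix) would keep only the batch processor; the batch sizes shown are illustrative and use standard batch processor settings to keep each OpenSearch bulk request small:

processors:
  batch:
    # Illustrative values; smaller batches mean smaller bulk requests.
    send_batch_size: 512
    send_batch_max_size: 1024
service:
  pipelines:
    traces:
      receivers:
        - otlp
      processors:
        - batch
      exporters:
        - debug
        - opensearch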

Thanks!
