Unable to find complete exported spans in OpenSearch Backend #36136
Comments
This may be an issue with the opensearch exporter; transferring to the contrib repo where that exporter lives.
Pinging code owners for exporter/opensearch: @Aneurysm9 @MitchellGale @MaxKsyunz @YANG-DB. See Adding Labels via Comments if you do not have permissions to add labels yourself.
@codeboten we are reviewing this issue and will respond shortly.
Hello @codeboten, any findings on this issue?
Hi @charan906, we’re currently unable to replicate this issue on our side. We tested using the latest build, and there don’t seem to have been any recent changes to the OpenSearch exporter. Could you please help us by providing more details? Specifically:
I’m not entirely familiar with the chunking mechanism the OpenSearch exporter uses for bulk requests, but it seems like a potential factor. Additionally, have you observed the same issue when the following processor configuration is removed?
Thanks!
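The processor configuration referenced above is not reproduced in this thread. For context on the chunking question, the batching that happens before the exporter determines how many spans end up in each OpenSearch bulk request. A minimal sketch of the standard batch processor settings that control this (values are illustrative assumptions, not taken from this issue):

processors:
  batch:
    # Number of spans after which a batch is sent regardless of the timeout.
    send_batch_size: 8192
    # Hard upper bound on batch size; 0 means no upper limit.
    send_batch_max_size: 0
    # Maximum time to wait before flushing a batch that has not reached send_batch_size.
    timeout: 200ms

If export stalls only with large batches, lowering send_batch_size (or setting send_batch_max_size) would shrink each bulk request and help narrow down whether request size is the trigger.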
I have generated 20,000 spans. The spans are received by the OpenTelemetry Collector, but during export, when I check the collector pod logs, it gets stuck at a particular span number every time, so I cannot see the complete span data in the backend.
This is the configuration I am using, with otel_version="v0.107.0". It is a customized binary that contains plugins from both the core and contrib repositories.
configuration:
exporters:
  debug:
    verbosity: detailed
  opensearch:
    http:
      endpoint: ${env:SE_SERVER_URLS}
      tls:
        ca_file: ${env:ROOT_CA_CERT}
        cert_file: ${env:CLIENT_CRT}
        key_file: ${env:CLIENT_KEY}
extensions:
  memory_ballast: {}
  health_check:
    endpoint: ${env:MY_POD_IP}:13133
  jaegerremotesampling:
    source:
      reload_interval: 0s
      file: /etc/sampling/samplingstrategies.json
processors:
  batch: {}
  memory_limiter:
    # check_interval is the time between measurements of memory usage.
    check_interval: 5s
    # By default limit_mib is set to 80% of ".Values.resources.limits.memory"
    limit_percentage: 80
    # By default spike_limit_mib is set to 25% of ".Values.resources.limits.memory"
    spike_limit_percentage: 25
  probabilistic_sampler:
    sampling_percentage: 100
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:MY_POD_IP}:4317
        tls:
          cert_file: ${env:SERVER_CRT}
          key_file: ${env:SERVER_KEY}
      http:
        endpoint: ${env:MY_POD_IP}:4318
        tls:
          cert_file: ${env:SERVER_CRT}
          key_file: ${env:SERVER_KEY}
service:
  extensions:
    - memory_ballast
    - health_check
    - jaegerremotesampling
  pipelines:
    traces:
      exporters:
        - debug
        - opensearch
      processors:
        - memory_limiter
        - batch
        - probabilistic_sampler
      receivers:
        - otlp
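One detail worth checking in the pipeline above: probabilistic_sampler runs after batch. The ordering usually recommended in the collector documentation is memory_limiter first, sampling next, and batch last, so spans are sampled individually before being grouped for export. A sketch of the same pipeline with only the processor order changed (a suggestion to try, not something prescribed in this thread):

service:
  pipelines:
    traces:
      receivers:
        - otlp
      processors:
        - memory_limiter
        - probabilistic_sampler
        - batch
      exporters:
        - debug
        - opensearch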
Kubernetes resource specifications:
resources:
  telemetry-collector:
    requests:
      memory: 64Mi
      cpu: 250m
    limits:
      memory: 128Mi
      cpu: 500m
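With a 128Mi container limit and the limit_percentage/spike_limit_percentage values above, the memory limiter works out to limit_mib ≈ 128 MiB × 80% ≈ 102 MiB and spike_limit_mib = 128 MiB × 25% = 32 MiB, which matches the startup log below. The limiter begins refusing data once usage approaches roughly 102 − 32 = 70 MiB.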
Server startup logs:
2024-10-30T12:15:48.331Z info memorylimiter/memorylimiter.go:151 Using percentage memory limiter {"kind": "processor", "name": "memory_limiter", "pipeline": "traces", "total_memory_mib": 128, "limit_percentage": 80, "spike_limit_percentage": 25}
2024-10-30T12:15:48.331Z info memorylimiter/memorylimiter.go:75 Memory limiter configured {"kind": "processor", "name": "memory_limiter", "pipeline": "traces", "limit_mib": 102, "spike_limit_mib": 32, "check_interval": 5}
2024-10-30T12:15:48.333Z info service@v0.107.0/service.go:195 Starting otelcol-custom... {"Version": "1.0.0", "NumCPU": 8}
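If the collector actually reaches that limit while handling 20,000 spans, the memory_limiter will start refusing incoming data, and refused spans that are not retried by the sender never reach OpenSearch, which would look like incomplete span data in the backend. A hedged sketch of raised pod limits to test that hypothesis (the numbers are illustrative assumptions, not values recommended anywhere in this thread):

resources:
  telemetry-collector:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 512Mi
      cpu: 500m

If the missing spans disappear after raising the limit (limit_percentage will recompute limit_mib accordingly), memory pressure is the likely cause; if not, attention can shift back to the exporter's bulk requests.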