Istio Traces Aren't Being Exported with Azure Monitor Exporter #35037
Labels: bug (Something isn't working), needs triage (New item requiring triage), added by whitneygriffith on Sep 5, 2024.
On Oct 29, 2024, whitneygriffith changed the title from "Istio Traces Aren't Being Exported with Azure Monitor Exporter - Suspected B3 to W3C Trace Context Conversion Issue" to "Istio Traces Aren't Being Exported with Azure Monitor Exporter".
shivanthzen pushed a commit to shivanthzen/opentelemetry-collector-contrib that referenced this issue on Dec 5, 2024 (the same commit was later referenced by ZenoCC-Peng on Dec 6, 2024 and by sbylica-splunk on Dec 17, 2024):

…emonitor (open-telemetry#36520)

#### Description
1. Resolved an issue where traces weren't being sent to App Insights due to not flushing the Telemetry Channel. Added the necessary flush operation to ensure all traces, metrics and logs are properly sent to the queue, leveraging App Insights' batch handling for more efficient processing.

#### Link to tracking issue
Resolves open-telemetry#35037

Signed-off-by: whitneygriffith <whitney.griffith16@gmail.com>
Co-authored-by: Andrzej Stencel <andrzej.stencel@elastic.co>
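For context, the fix described in that commit amounts to explicitly flushing the exporter's telemetry channel so queued items are actually transmitted. A rough, illustrative sketch of the same idea using the upstream microsoft/ApplicationInsights-Go client (not the exporter's actual code, and with a made-up instrumentation key) looks like:

```go
package main

import (
	"time"

	"github.com/microsoft/ApplicationInsights-Go/appinsights"
)

func main() {
	// Hypothetical instrumentation key, for illustration only.
	client := appinsights.NewTelemetryClient("00000000-0000-0000-0000-000000000000")

	// Queue a telemetry item; it sits in the in-memory channel until sent.
	client.TrackTrace("example trace from the collector", appinsights.Information)

	// Without an explicit flush, queued items may not be transmitted promptly.
	client.Channel().Flush()

	// Give the background sender a moment to drain (illustrative only).
	time.Sleep(2 * time.Second)
}
```

The actual exporter change lives in open-telemetry#36520; this snippet only illustrates the flush pattern the commit message refers to.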
Component(s)
exporter/azuremonitor
What happened?
Description
When using Istio with the OpenTelemetry Azure Monitor exporter, traces are not being propagated to Azure Monitor.
Steps to Reproduce
See the Environment information below for the full set of commands used to reproduce the issue.
Expected Result
Traces generated by Istio should appear correctly in Application Insights with proper formatting and trace context propagation using W3C Trace Context headers.
Actual Result
Traces are not appearing in Application Insights, suggesting a failure to convert B3 headers into the W3C Trace Context format. No logging is provided that shows the trace data being rejected by Application Insights.
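For reference, the suspected mismatch is between Envoy's B3 propagation header and the W3C Trace Context header expected downstream; the same trace context (made-up values) looks like this in each format:

```
# B3 single header: {trace-id}-{span-id}-{sampling-flag}
b3: 80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-1

# W3C Trace Context: {version}-{trace-id}-{parent-id}-{trace-flags}
traceparent: 00-80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-01
```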
Collector version
923eb1cf
Environment information
Environment
OS: Ubuntu 22.04.04
Istio installed with Helm:

```sh
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --wait
helm status istiod -n istio-system
```
Configure Providers
```sh
kubectl get configmap istio -n istio-system -o yaml > configmap.yaml
```

Update the ConfigMap so the mesh config defines an OpenTelemetry trace provider that exports OTLP over gRPC:

```yaml
mesh: |-
  defaultConfig:
    discoveryAddress: istiod.istio-system.svc:15012
    tracing: {}
  defaultProviders:
    metrics:
    - prometheus
  enablePrometheusMerge: true
  rootNamespace: istio-system
  trustDomain: cluster.local
  enableTracing: true
  extensionProviders:
  - name: otel-tracing
    opentelemetry:
      port: 4317
      service: opentelemetry-collector.otel.svc.cluster.local
      grpc: {}
```

```sh
kubectl apply -f configmap.yaml
```
Install Otel Collector
```sh
kubectl create namespace otel
kubectl label namespace otel istio-injection=enabled
kubectl apply -f otel-collector-contrib.yaml -n otel
```
Set up demo
```sh
kubectl create namespace demo
kubectl label namespace demo istio-injection=enabled
```
Create Telemetry Rule
```sh
cat <<EOF > tel-rule-otel-tracing.yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: otel-tracing
  namespace: demo
spec:
  tracing:
  - randomSamplingPercentage: 100
    customTags:
      "app-insights":
        literal:
          value: "from-otel-collector"
EOF

kubectl apply -f tel-rule-otel-tracing.yaml
```
Generate and view traces
```sh
kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml -n demo
kubectl get pods -n demo
```
Generate traces
```sh
for i in $(seq 1 100); do kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}' -n demo)" -c ratings -n demo -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"; done
```
Verify traces on console
```sh
kubectl logs -n otel "$(kubectl get pods -n otel -l app=opentelemetry-collector -o jsonpath='{.items[0].metadata.name}')" | grep "app-insights"
```
Verify traces on Application Insights
OpenTelemetry Collector configuration
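No collector configuration was attached to the issue. A minimal sketch of what such a setup likely resembles, assuming an OTLP/gRPC receiver on port 4317 and the azuremonitor exporter with a placeholder connection string, would be:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  azuremonitor:
    # Placeholder connection string; substitute the real Application Insights value.
    connection_string: "InstrumentationKey=00000000-0000-0000-0000-000000000000"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [azuremonitor]
```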
Log output
Additional context
No response