Istio Traces Aren't Being Exported with Azure Monitor Exporter #35037

Closed

whitneygriffith opened this issue Sep 5, 2024 · 1 comment · Fixed by #36520
Labels
bug (Something isn't working) · exporter/azuremonitor · needs triage (New item requiring triage)

Comments

whitneygriffith (Contributor) commented Sep 5, 2024

Component(s)

exporter/azuremonitor

What happened?

Description

When using Istio with the OpenTelemetry Azure Monitor exporter, traces are not being propagated to Azure Monitor.

Steps to Reproduce

  1. Set up an Istio service mesh with tracing enabled.
  2. Deploy the OpenTelemetry Collector with the azuremonitor exporter to send trace data to Application Insights.
  3. Generate traffic within the mesh and observe the trace data in Application Insights.

Expected Result

Traces generated by Istio should appear correctly in Application Insights with proper formatting and trace context propagation using W3C Trace Context headers.

Actual Result

Traces are not appearing in Application Insights, suggesting a failure to convert B3 headers into the W3C Trace Context format. No logging is provided that shows the trace data was rejected by Application Insights.
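
For reference, the two header formats involved look like this (illustrative only, using the trace and span IDs from the log output below; the actual headers on the wire were not captured in this report):

x-b3-traceid: e7e12ca17cf7655ff46ecd79cec9451c
x-b3-spanid: 05226d559db0c33d
x-b3-parentspanid: 9e8b852dba31ab84
x-b3-sampled: 1

traceparent: 00-e7e12ca17cf7655ff46ecd79cec9451c-05226d559db0c33d-01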

Collector version

923eb1cf

Environment information

Environment

OS: Ubuntu 22.04.4

Istio installed with Helm

helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --wait
helm status istiod -n istio-system

Configure Providers

kubectl get configmap istio -n istio-system -o yaml > configmap.yaml

Update the configmap to export traces in OTLP format over gRPC

mesh: |-
  defaultConfig:
    discoveryAddress: istiod.istio-system.svc:15012
    tracing: {}
  defaultProviders:
    metrics:
    - prometheus
  enablePrometheusMerge: true
  rootNamespace: istio-system
  trustDomain: cluster.local
  enableTracing: true
  extensionProviders:
  - name: otel-tracing
    opentelemetry:
      port: 4317
      service: opentelemetry-collector.otel.svc.cluster.local
      grpc: {}

kubectl apply -f configmap.yaml
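
As a quick sanity check (not part of the original report), confirm the extension provider landed in the live mesh config:

kubectl get configmap istio -n istio-system -o yaml | grep -A 4 extensionProviders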

Install Otel Collector

kubectl create namespace otel
kubectl label namespace otel istio-injection=enabled

cat <<EOF > otel-collector-contrib.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opentelemetry-collector-conf
  labels:
    app: opentelemetry-collector
data:
  opentelemetry-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: debug
      azuremonitor:
        connection_string: "InstrumentationKey="
        spaneventsenabled: true
        maxbatchinterval: .05s
        sending_queue:
          enabled: true
          num_consumers: 10
          queue_size: 2
    extensions:
      health_check:
        port: 13133
    service:
      extensions:
      - health_check
      telemetry:
        logs:
          debug:
            verbosity: detailed
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, azuremonitor]
        traces:
          receivers: [otlp]
          exporters: [logging, azuremonitor]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetry-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry-collector
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: opentelemetry-collector
    spec:
      containers:
        - name: opentelemetry-collector
          image: otel/opentelemetry-collector-contrib:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4317
              protocol: TCP
            - containerPort: 4318
              protocol: TCP
          resources:
            limits:
              cpu: "2"
              memory: 4Gi
            requests:
              cpu: 200m
              memory: 400Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - name: opentelemetry-collector-config-vol
              mountPath: /etc/otel
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            items:
              - key: opentelemetry-collector-config
                path: opentelemetry-collector-config.yaml
            name: opentelemetry-collector-conf
          name: opentelemetry-collector-config-vol
---
apiVersion: v1
kind: Service
metadata:
  name: opentelemetry-collector
  labels:
    app: opentelemetry-collector
spec:
  ports:
    - name: grpc-otlp # Default endpoint for OpenTelemetry receiver.
      port: 4317
      protocol: TCP
      targetPort: 4317
    - name: http-otlp # HTTP endpoint for OpenTelemetry receiver.
      port: 4318
      protocol: TCP
      targetPort: 4318
  selector:
    app: opentelemetry-collector
EOF

kubectl apply -f otel-collector-contrib.yaml -n otel
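
Optionally, before wiring Istio to the collector, confirm the pod is running and the collector started cleanly (hypothetical check, not from the original report; the grep assumes the collector's usual "Everything is ready" startup message):

kubectl get pods -n otel -l app=opentelemetry-collector
kubectl logs -n otel deploy/opentelemetry-collector | grep -i "everything is ready"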

Set up demo

kubectl create namespace demo
kubectl label namespace demo istio-injection=enabled

Create Telemetry Rule

cat <<EOF > tel-rule-otel-tracing.yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: otel-tracing
  namespace: demo
spec:
  tracing:
  - providers:
    - name: otel-tracing
    randomSamplingPercentage: 100
    customTags:
      "app-insights":
        literal:
          value: "from-otel-collector"
EOF

kubectl apply -f tel-rule-otel-tracing.yaml
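
Optionally confirm the Telemetry resource was accepted (hypothetical check, not from the original report):

kubectl get telemetry otel-tracing -n demo -o yaml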

Generate and view traces

kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml -n demo
kubectl get pods -n demo

Generate traces

for i in $(seq 1 100); do kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}' -n demo)" -c ratings -n demo -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"; done
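
As an additional, hypothetical spot check (not in the original report), a known W3C traceparent header (format 00-<trace-id>-<parent-id>-<flags>) can be injected and then searched for in the collector output, which helps separate header-propagation problems from exporter problems:

TRACE_ID=$(openssl rand -hex 16)
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}' -n demo)" -c ratings -n demo -- \
  curl -sS productpage:9080/productpage -H "traceparent: 00-${TRACE_ID}-$(openssl rand -hex 8)-01" > /dev/null
kubectl logs -n otel deploy/opentelemetry-collector | grep "$TRACE_ID"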

Verify traces on console

kubectl logs -n otel "$(kubectl get pods -n otel -l app=opentelemetry-collector -o jsonpath='{.items[0].metadata.name}')" | grep "app-insights"

Verify traces on Application Insights
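
One way to check on the Azure side from the command line (hypothetical; assumes the Azure CLI application-insights extension is installed and <app-id> is the Application Insights app ID):

az monitor app-insights query \
  --app <app-id> \
  --analytics-query "requests | where tostring(customDimensions['app-insights']) != '' | take 10"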

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
exporters:
  logging:
    loglevel: debug
  azuremonitor:
    connection_string: "InstrumentationKey="
    spaneventsenabled: true
    maxbatchinterval: .05s
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 2
extensions:
  health_check:
    port: 13133
service:
  extensions:
  - health_check
  telemetry:
    logs:
      debug:
        verbosity: detailed
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, azuremonitor]
    traces:
      receivers: [otlp]
      exporters: [logging, azuremonitor]

Log output

2024-09-04T15:46:16.212Z    info    TracesExporter    {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 1}
2024-09-04T15:46:16.212Z    info    ResourceSpans #0
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(details.demo)
ScopeSpans #0
ScopeSpans SchemaURL: 
InstrumentationScope  
Span #0
    Trace ID       : e7e12ca17cf7655ff46ecd79cec9451c
    Parent ID      : 9e8b852dba31ab84
    ID             : 05226d559db0c33d
    Name           : details.demo.svc.cluster.local:9080/*
    Kind           : Server
    Start time     : 2024-09-04 15:46:14.652164 +0000 UTC
    End time       : 2024-09-04 15:46:14.655316 +0000 UTC
    Status code    : Unset
    Status message : 
Attributes:
     -> node_id: Str(sidecar~10.244.0.13~details-v1-79dfbd6fff-j7jjs.demo~demo.svc.cluster.local)
     -> zone: Str()
     -> guid:x-request-id: Str(d17c4164-5765-9003-a163-04039b0b98e0)
     -> http.url: Str(http://details:9080/details/0)
     -> http.method: Str(GET)
     -> downstream_cluster: Str(-)
     -> user_agent: Str(curl/7.88.1)
     -> http.protocol: Str(HTTP/1.1)
     -> peer.address: Str(10.244.0.18)
     -> request_size: Str(0)
     -> response_size: Str(178)
     -> component: Str(proxy)
     -> upstream_cluster: Str(inbound|9080||)
     -> upstream_cluster.name: Str(inbound|9080||;)
     -> http.status_code: Str(200)
     -> response_flags: Str(-)
     -> istio.mesh_id: Str(cluster.local)
     -> istio.canonical_revision: Str(v1)
     -> istio.canonical_service: Str(details)
     -> app-insights: Str(otel)
     -> istio.cluster_id: Str(Kubernetes)
     -> istio.namespace: Str(demo)
    {"kind": "exporter", "data_type": "traces", "name": "debug"}

Additional context

No response

whitneygriffith added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Sep 5, 2024

github-actions bot commented Sep 5, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

whitneygriffith changed the title from "Istio Traces Aren't Being Exported with Azure Monitor Exporter - Suspected B3 to W3C Trace Context Conversion Issue" to "Istio Traces Aren't Being Exported with Azure Monitor Exporter" on Oct 29, 2024
shivanthzen pushed a commit to shivanthzen/opentelemetry-collector-contrib that referenced this issue Dec 5, 2024
…emonitor (open-telemetry#36520)

#### Description

1. Resolved an issue where traces weren't being sent to App Insights due
to not flushing the Telemetry Channel. Added the necessary flush
operation to ensure all traces, metrics and logs are properly sent to
the queue, leveraging App Insights' batch handling for more efficient
processing.

#### Link to tracking issue
Resolves open-telemetry#35037

---------

Signed-off-by: whitneygriffith <whitney.griffith16@gmail.com>
Co-authored-by: Andrzej Stencel <andrzej.stencel@elastic.co>
ZenoCC-Peng pushed a commit to ZenoCC-Peng/opentelemetry-collector-contrib that referenced this issue Dec 6, 2024

sbylica-splunk pushed a commit to sbylica-splunk/opentelemetry-collector-contrib that referenced this issue Dec 17, 2024