Support k8s log file in syslog exporter #30746

Closed

oppokui opened this issue Jan 24, 2024 · 28 comments
Labels
enhancement (New feature or request), exporter/syslog, needs triage (New item requiring triage)

Comments

@oppokui

oppokui commented Jan 24, 2024

Component(s)

exporter/syslog

Is your feature request related to a problem? Please describe.

We want to collect Kubernetes logs through the OpenTelemetry Collector and then analyze them externally. The initial idea is to set up a syslog server and forward the logs from there to an HDFS cluster; the OpenTelemetry Collector's job is to forward the Kubernetes logs to the syslog server.

I created a pipeline with the filelog receiver and the syslog exporter, but the external syslog server never receives any UDP packets. It seems the k8s logs are not in syslog format, so the syslog exporter doesn't do anything with them.
I also couldn't find any syslog processor or formatter among the available pipeline components.

    exporters:
      syslog:
        endpoint: 23.21.46.218
        network: udp
        port: 5140
        protocol: rfc5424
        retry_on_failure:
          enabled: true
          initial_interval: 10s
          max_elapsed_time: 150s
          max_interval: 40s
        sending_queue:
          enabled: true
          num_consumers: 20
          queue_size: 10000
        timeout: 1s
    receivers:
      filelog:
        exclude: []
        include:
        - /var/log/pods/*/*/*.log
        include_file_name: false
        include_file_path: true
    service:
      extensions: {}
      pipelines:
        logs:
          exporters:
          - syslog
          receivers:
          - filelog

Any suggestions?

Describe the solution you'd like

Is it possible to create a syslog processor like this:

processors:
  syslog:
    input_format: "json"
    output_format: "rfc5424"
    rfc5424:
      app_name: "my-app"
      facility: "local0"

Describe alternatives you've considered

No response

Additional context

No response

@oppokui oppokui added the enhancement (New feature or request) and needs triage (New item requiring triage) labels on Jan 24, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


Pinging code owners for receiver/filelog: @djaglowski. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@crobert-1
Member

Hello @oppokui, since there are a few moving parts here, it would be good to narrow down which component isn't working as expected. It sounds like you're not receiving logs at your syslog server. Is that the issue you're trying to resolve?

Is the collector logging any relevant messages about the syslog exporter?

Can you please set up your collector using the debug exporter instead of the syslog exporter, and confirm you're seeing the logs you expect in the output? This will help us determine if there's a problem with your receiver configuration, or something wrong with the exporter configuration.
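
A minimal sketch of such a test pipeline, assuming the filelog include path from the config above (the detailed verbosity is optional but prints the full records):

receivers:
  filelog:
    include:
    - /var/log/pods/*/*/*.log
    include_file_path: true

exporters:
  debug:
    # detailed verbosity prints each log record with its body and attributes
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [ filelog ]
      exporters: [ debug ]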

@djaglowski
Member

In addition to @crobert-1's suggestion, it may be helpful to try sending the logs back to a syslog receiver on the same collector. This loopback test could remove your backend from the equation and demonstrate whether the exporter is working correctly.

receivers:
  filelog: ... # reads original logs
  syslog: ... # reads from syslog exporter

exporters:
  syslog: ... # sends to syslog receiver
  debug: ... # writes final result

service:
  pipelines:
    logs/in:
      receivers: [ filelog ]
      exporters: [ syslog ]
    logs/out:
      receivers: [ syslog ]
      exporters: [ debug ]

@oppokui
Author

oppokui commented Jan 29, 2024

Thanks, @crobert-1 and @djaglowski. I am pretty sure filelog generates events; there is just no data coming out of syslog.

I did try the debug exporter and saw events printed, using the following pipeline:

      pipelines:
        logs:
          exporters:
          - syslog
          - debug
          receivers:
          - filelog

The detailed log output looks like this:

Trace ID: 
Span ID: 
Flags: 0
LogRecord #13
ObservedTimestamp: 2024-01-23 11:30:27.201206667 +0000 UTC
Timestamp: 2024-01-23 11:29:43.724431557 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(2024-01-23T11:29:43.724431557Z stderr F Timestamp: 2024-01-23 05:27:05.384365083 +0000 UTC)
Attributes:
     -> logtag: Str(F)
     -> time: Str(2024-01-23T11:29:43.724431557Z)
     -> log: Str(Timestamp: 2024-01-23 05:27:05.384365083 +0000 UTC)
     -> log.iostream: Str(stderr)
     -> log.file.path: Str(/var/log/pods/observability_otel-collector-agent-2fgpf_934a4080-87b4-4a83-a5d3-8d7a032a0fc3/opentelemetry-collector/0.log)

The log format is not syslog format; I guess that is why the syslog exporter didn't generate any output. Are you sure it should work? In the syslog exporter example on GitHub, the file read by filelog was already in syslog format and used a syslog parser in its operators. I couldn't find any other example on Google. If you have one, please let me know.

@oppokui
Author

oppokui commented Jan 30, 2024

@djaglowski , I tried your config, but it failed.

I configured it as follows:

data:
  relay: |
    exporters:
      debug: {}
      logging: {}
      syslog:
        endpoint: ${env:MY_POD_IP}
        network: udp
        port: 5140
        protocol: rfc5424
    extensions:
      health_check: {}
    processors:
      batch: {}
    receivers:
      syslog:
        udp:
          listen_address: "${env:MY_POD_IP}:5140"
        protocol: rfc5424
        location: UTC
      filelog:
        exclude:
        - /var/log/pods/observability_otel-collector*_*/opentelemetry-collector/*.log
        include:
        - /var/log/pods/*/*/*.log
        include_file_name: false
        include_file_path: true
        start_at: beginning
    service:
      extensions: {}
      pipelines:
        logs/in:
          exporters:
          - syslog
          receivers:
          - filelog
        logs/out:
          exporters:
          - debug
          receivers:
          - syslog
      telemetry:
        metrics:
          address: ${env:MY_POD_IP}:8888
kind: ConfigMap

I opened port 5140 in the collector DaemonSet YAML, but from the log it looks like the syslog receiver isn't listening on port 5140, so the exporter fails to connect to it.

2024-01-30T00:31:00.708Z	info	service@v0.91.0/telemetry.go:86	Setting up own telemetry...
2024-01-30T00:31:00.708Z	info	service@v0.91.0/telemetry.go:203	Serving Prometheus metrics	{"address": "10.3.15.68:8888", "level": "Basic"}
2024-01-30T00:31:00.709Z	info	syslogexporter@v0.91.0/exporter.go:42	Syslog Exporter configured	{"kind": "exporter", "data_type": "logs", "name": "syslog", "endpoint": "10.3.15.68", "protocol": "rfc5424", "port": 5140}
2024-01-30T00:31:00.709Z	info	exporter@v0.91.0/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-01-30T00:31:00.709Z	info	service@v0.91.0/service.go:145	Starting otelcol-contrib...	{"Version": "0.91.0", "NumCPU": 2}
2024-01-30T00:31:00.710Z	info	extensions/extensions.go:34	Starting extensions...
2024-01-30T00:31:00.710Z	info	adapter/receiver.go:45	Starting stanza receiver	{"kind": "receiver", "name": "filelog", "data_type": "logs"}
2024-01-30T00:31:00.710Z	info	adapter/receiver.go:45	Starting stanza receiver	{"kind": "receiver", "name": "syslog", "data_type": "logs"}
2024-01-30T00:31:00.710Z	info	service@v0.91.0/service.go:171	Everything is ready. Begin running and processing data.
2024-01-30T00:31:00.911Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/default_aiops-0_a5d0a17a-732d-4076-8ac0-a2551aa89a5b/aiops/0.log"}
2024-01-30T00:31:00.914Z	info	exporterhelper/retry_sender.go:154	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "syslog", "error": "dial tcp 10.3.15.68:5140: connect: connection refused", "interval": "6.146951608s"}
2024-01-30T00:31:07.062Z	info	exporterhelper/retry_sender.go:154	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "syslog", "error": "dial tcp 10.3.15.68:5140: connect: connection refused", "interval": "5.185220391s"}
2024-01-30T00:31:12.248Z	info	exporterhelper/retry_sender.go:154	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "syslog", "error": "dial tcp 10.3.15.68:5140: connect: connection refused", "interval": "13.930141194s"}
2024-01-30T00:31:26.180Z	info	exporterhelper/retry_sender.go:154	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "syslog", "error": "dial tcp 10.3.15.68:5140: connect: connection refused", "interval": "17.589354805s"}
2024-01-30T00:31:43.770Z	info	exporterhelper/retry_sender.go:154	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "syslog", "error": "dial tcp 10.3.15.68:5140: connect: connection refused", "interval": "16.555660002s"}
2024-01-30T00:32:00.327Z	info	exporterhelper/retry_sender.go:154	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "syslog", "error": "dial tcp 10.3.15.68:5140: connect: connection refused", "interval": "15.470073764s"}
2024-01-30T00:32:15.797Z	info	exporterhelper/retry_sender.go:154	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "syslog", "error": "dial tcp 10.3.15.68:5140: connect: connection refused", "interval": "21.955040327s"}
2024-01-30T00:32:37.753Z	info	exporterhelper/retry_sender.go:154	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "syslog", "error": "dial tcp 10.3.15.68:5140: connect: connection refused", "interval": "40.000188183s"}

@djaglowski
Member

The log format is not syslog format; I guess that is why the syslog exporter didn't generate any output. Are you sure it should work?

The collector has an internal representation of logs. Receivers convert other formats into the internal representation, and exporters convert from the internal representation into other formats. So you should be able to use the syslog exporter with any logs that are successfully received by the collector.

@oppokui
Author

oppokui commented Jan 31, 2024

@djaglowski , thanks for the confirmation.

I don't see any log lines printed for the syslog exporter. The collector log only shows one .log file being watched by the filelog receiver and then gets stuck there. I assumed it would find all of the k8s logs on the current EC2 node, but it didn't.

$ kubectl logs -n observability otel-collector-agent-9xpfp -f
2024-01-31T09:40:20.573Z	info	service@v0.91.0/telemetry.go:86	Setting up own telemetry...
2024-01-31T09:40:20.574Z	info	service@v0.91.0/telemetry.go:203	Serving Prometheus metrics	{"address": "10.3.1.18:8888", "level": "Basic"}
2024-01-31T09:40:20.574Z	info	exporter@v0.91.0/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-01-31T09:40:20.574Z	info	syslogexporter@v0.91.0/exporter.go:42	Syslog Exporter configured	{"kind": "exporter", "data_type": "logs", "name": "syslog", "endpoint": "23.21.46.218", "protocol": "rfc5424", "port": 5140}
2024-01-31T09:40:20.575Z	info	service@v0.91.0/service.go:145	Starting otelcol-contrib...	{"Version": "0.91.0", "NumCPU": 2}
2024-01-31T09:40:20.575Z	info	extensions/extensions.go:34	Starting extensions...
2024-01-31T09:40:20.575Z	info	adapter/receiver.go:45	Starting stanza receiver	{"kind": "receiver", "name": "filelog", "data_type": "logs"}
2024-01-31T09:40:20.576Z	info	service@v0.91.0/service.go:171	Everything is ready. Begin running and processing data.
2024-01-31T09:40:20.777Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/example-hotrod_hotrod-customer-776c9c447f-xzcpz_0b9e34bf-69ac-44e7-92c4-a236ff365012/hotrod-customer/0.log"}

By the way, in the original syslog example https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/syslogexporter/examples/config_with_filelog_receiver.yaml, I saw the extension "file_storage/syslog". I commented it out because it didn't work in my environment. Could that be the reason the syslog exporter has no temp folder to generate output?

The file_storage extension needs a hostPath, but it always reports a permission issue. How can I resolve that?
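
A hedged workaround, which the deployment shared further below also ends up using, is an initContainer that chowns the hostPath before the collector starts; a sketch, assuming the contrib image runs as UID 10001:

initContainers:
- name: init-fs
  image: busybox:latest
  # hand the storage directory over to the collector's non-root user before startup
  command: ['sh', '-c', 'chown -R 10001: /var/lib/storage/otc']
  volumeMounts:
  - mountPath: /var/lib/storage/otc
    name: opentelemetry-collector-data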

@oppokui
Author

oppokui commented Feb 1, 2024

@djaglowski , the file_storage extension is not a problem anymore; I configured an initContainer to create the directory. But still no syslog events are sent out.

Here is the complete deployment yaml.

The config file:

apiVersion: v1
data:
  relay: |
    exporters:
      debug: {}
      logging: {}
      otlp:
        endpoint: jaeger.example-hotrod:4317
        tls:
          insecure: true
      syslog:
        endpoint: 23.21.46.218
        network: udp
        port: 5140
        protocol: rfc5424
        retry_on_failure:
          enabled: true
          initial_interval: 10s
          max_elapsed_time: 150s
          max_interval: 40s
        sending_queue:
          enabled: true
          num_consumers: 20
          queue_size: 10000
          storage: file_storage/syslog
        timeout: 1s
        tls: {}
    extensions:
      file_storage/syslog:
        directory: /var/lib/storage/otc/
        timeout: 10s
      health_check: {}
    processors:
      batch: {}
      k8sattributes:
        extract:
          labels:
          - from: pod
            key_regex: (.*)
            tag_name: $$1
          metadata:
          - k8s.namespace.name
          - k8s.deployment.name
          - k8s.statefulset.name
          - k8s.daemonset.name
          - k8s.cronjob.name
          - k8s.job.name
          - k8s.node.name
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.pod.start_time
        filter:
          node_from_env_var: K8S_NODE_NAME
        passthrough: false
        pod_association:
        - sources:
          - from: resource_attribute
            name: k8s.pod.ip
        - sources:
          - from: resource_attribute
            name: k8s.pod.uid
        - sources:
          - from: connection
      memory_limiter:
        check_interval: 5s
        limit_percentage: 80
        spike_limit_percentage: 25
    receivers:
      filelog:
        exclude:
        - /var/log/pods/observability_otel-collector*_*/opentelemetry-collector/*.log
        include:
        - /var/log/pods/*/*/*.log
        include_file_name: false
        include_file_path: true
        operators:
        - id: get-format
          routes:
          - expr: body matches "^\\{"
            output: parser-docker
          - expr: body matches "^[^ Z]+ "
            output: parser-crio
          - expr: body matches "^[^ Z]+Z"
            output: parser-containerd
          type: router
        - id: parser-crio
          output: extract_metadata_from_filepath
          regex: ^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$
          timestamp:
            layout: 2006-01-02T15:04:05.999999999Z07:00
            layout_type: gotime
            parse_from: attributes.time
          type: regex_parser
        - id: parser-containerd
          output: extract_metadata_from_filepath
          regex: ^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$
          timestamp:
            layout: '%Y-%m-%dT%H:%M:%S.%LZ'
            parse_from: attributes.time
          type: regex_parser
        - id: parser-docker
          output: extract_metadata_from_filepath
          timestamp:
            layout: '%Y-%m-%dT%H:%M:%S.%LZ'
            parse_from: attributes.time
          type: json_parser
        - from: attributes.log
          to: body
          type: move
        - cache:
            size: 128
          id: extract_metadata_from_filepath
          parse_from: attributes["log.file.path"]
          regex: ^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]{36})\/(?P<container_name>[^\._]+)\/(?P<restart_count>\d+)\.log$
          type: regex_parser
        - from: attributes.stream
          to: attributes["log.iostream"]
          type: move
        - from: attributes.container_name
          to: resource["k8s.container.name"]
          type: move
        - from: attributes.namespace
          to: resource["k8s.namespace.name"]
          type: move
        - from: attributes.pod_name
          to: resource["k8s.pod.name"]
          type: move
        - from: attributes.restart_count
          to: resource["k8s.container.restart_count"]
          type: move
        - from: attributes.uid
          to: resource["k8s.pod.uid"]
          type: move
        start_at: beginning
      hostmetrics:
        collection_interval: 10s
        root_path: /hostfs
        scrapers:
          cpu: null
          disk: null
          filesystem:
            exclude_fs_types:
              fs_types:
              - autofs
              - binfmt_misc
              - bpf
              - cgroup2
              - configfs
              - debugfs
              - devpts
              - devtmpfs
              - fusectl
              - hugetlbfs
              - iso9660
              - mqueue
              - nsfs
              - overlay
              - proc
              - procfs
              - pstore
              - rpc_pipefs
              - securityfs
              - selinuxfs
              - squashfs
              - sysfs
              - tracefs
              match_type: strict
            exclude_mount_points:
              match_type: regexp
              mount_points:
              - /dev/*
              - /proc/*
              - /sys/*
              - /run/k3s/containerd/*
              - /var/lib/docker/*
              - /var/lib/kubelet/*
              - /snap/*
          load: null
          memory: null
          network: null
      jaeger:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:14250
          thrift_compact:
            endpoint: ${env:MY_POD_IP}:6831
          thrift_http:
            endpoint: ${env:MY_POD_IP}:14268
      kubeletstats:
        auth_type: serviceAccount
        collection_interval: 20s
        endpoint: ${env:K8S_NODE_NAME}:10250
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
      prometheus:
        config:
          scrape_configs:
          - job_name: opentelemetry-collector
            scrape_interval: 10s
            static_configs:
            - targets:
              - ${env:MY_POD_IP}:8888
      zipkin:
        endpoint: ${env:MY_POD_IP}:9411
    service:
      extensions:
      - file_storage/syslog
      pipelines:
        logs:
          exporters:
          - syslog
          - debug
          processors:
          - k8sattributes
          - memory_limiter
          - batch
          receivers:
          - filelog
      telemetry:
        metrics:
          address: ${env:MY_POD_IP}:8888
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: otel
    meta.helm.sh/release-namespace: observability
  creationTimestamp: "2024-01-23T12:14:31Z"
  labels:
    app.kubernetes.io/instance: otel
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: collector
    app.kubernetes.io/version: 0.91.0
    helm.sh/chart: opentelemetry-collector-0.77.0
  name: otel-collector-agent
  namespace: observability
  resourceVersion: "11580430"
  uid: 664472bb-263e-4b92-8620-03255c602bca

The DaemonSet of the OTel Collector:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/instance: otel
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: collector
    app.kubernetes.io/version: 0.91.0
    helm.sh/chart: opentelemetry-collector-0.77.0
  name: otel-collector-agent
  namespace: observability
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: otel
      app.kubernetes.io/name: collector
      component: agent-collector
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: otel
        app.kubernetes.io/name: collector
        component: agent-collector
    spec:
      containers:
      - command:
        - /otelcol-contrib
        - --config=/conf/relay.yaml
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: otel/opentelemetry-collector-contrib:0.91.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /metrics
            port: 8888
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: opentelemetry-collector
        ports:
        - containerPort: 15140
          name: syslog
          protocol: UDP
        - containerPort: 6831
          hostPort: 6831
          name: jaeger-compact
          protocol: UDP
        - containerPort: 14250
          hostPort: 14250
          name: jaeger-grpc
          protocol: TCP
        - containerPort: 14268
          hostPort: 14268
          name: jaeger-thrift
          protocol: TCP
        - containerPort: 4317
          hostPort: 4317
          name: otlp
          protocol: TCP
        - containerPort: 4318
          hostPort: 4318
          name: otlp-http
          protocol: TCP
        - containerPort: 9411
          hostPort: 9411
          name: zipkin
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /metrics
            port: 8888
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /conf
          name: opentelemetry-collector-configmap
        - mountPath: /var/log/pods
          name: varlogpods
          readOnly: true
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /hostfs
          mountPropagation: HostToContainer
          name: hostfs
          readOnly: true
        - mountPath: /var/lib/storage/otc/
          name: opentelemetry-collector-data
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - 'chown -R 10001: /var/lib/storage/otc'
        image: busybox:latest
        imagePullPolicy: Always
        name: init-fs
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/storage/otc
          name: opentelemetry-collector-data
      nodeSelector:
        alpha.eksctl.io/nodegroup-name: worker
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: otel-collector
      serviceAccountName: otel-collector
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: relay
            path: relay.yaml
          name: otel-collector-agent
        name: opentelemetry-collector-configmap
      - hostPath:
          path: /var/log/pods
          type: ""
        name: varlogpods
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - hostPath:
          path: /
          type: ""
        name: hostfs
      - hostPath:
          path: /var/lib/storage/otc/
          type: ""
        name: opentelemetry-collector-data
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate

When it is deployed, the debug exporter output is printed:

2024-01-31T23:48:50.624Z	info	service@v0.91.0/telemetry.go:86	Setting up own telemetry...
2024-01-31T23:48:50.624Z	info	service@v0.91.0/telemetry.go:203	Serving Prometheus metrics	{"address": "10.3.1.18:8888", "level": "Basic"}
2024-01-31T23:48:50.624Z	info	exporter@v0.91.0/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-01-31T23:48:50.625Z	info	syslogexporter@v0.91.0/exporter.go:42	Syslog Exporter configured	{"kind": "exporter", "data_type": "logs", "name": "syslog", "endpoint": "23.21.46.218", "protocol": "rfc5424", "port": 5140}
2024-01-31T23:48:50.627Z	info	memorylimiterprocessor@v0.91.0/memorylimiter.go:138	Using percentage memory limiter	{"kind": "processor", "name": "memory_limiter", "pipeline": "logs", "total_memory_mib": 3907, "limit_percentage": 80, "spike_limit_percentage": 25}
2024-01-31T23:48:50.627Z	info	memorylimiterprocessor@v0.91.0/memorylimiter.go:102	Memory limiter configured	{"kind": "processor", "name": "memory_limiter", "pipeline": "logs", "limit_mib": 3126, "spike_limit_mib": 976, "check_interval": 5}
2024-01-31T23:48:50.627Z	info	kube/client.go:113	k8s filtering	{"kind": "processor", "name": "k8sattributes", "pipeline": "logs", "labelSelector": "", "fieldSelector": "spec.nodeName=ip-10-3-21-159.ec2.internal"}
2024-01-31T23:48:50.628Z	info	service@v0.91.0/service.go:145	Starting otelcol-contrib...	{"Version": "0.91.0", "NumCPU": 2}
2024-01-31T23:48:50.628Z	info	extensions/extensions.go:34	Starting extensions...
2024-01-31T23:48:50.628Z	info	extensions/extensions.go:37	Extension is starting...	{"kind": "extension", "name": "file_storage/syslog"}
2024-01-31T23:48:50.629Z	info	extensions/extensions.go:52	Extension started.	{"kind": "extension", "name": "file_storage/syslog"}
2024-01-31T23:48:50.638Z	info	internal/persistent_queue.go:319	Fetching items left for dispatch by consumers	{"kind": "exporter", "data_type": "logs", "name": "syslog", "numberOfItems": 20}
2024-01-31T23:48:51.383Z	info	internal/persistent_queue.go:361	Moved items for dispatching back to queue	{"kind": "exporter", "data_type": "logs", "name": "syslog", "numberOfItems": 20}
2024-01-31T23:48:51.383Z	info	adapter/receiver.go:45	Starting stanza receiver	{"kind": "receiver", "name": "filelog", "data_type": "logs"}
2024-01-31T23:48:51.384Z	info	service@v0.91.0/service.go:171	Everything is ready. Begin running and processing data.
2024-01-31T23:48:51.586Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/example-hotrod_hotrod-customer-746549d779-69wld_4755e27a-7b2a-4069-b521-d167c02bca70/hotrod-customer/0.log"}
2024-01-31T23:48:51.645Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 121, "log records": 5700}
2024-01-31T23:48:51.735Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 162, "log records": 8201}
2024-01-31T23:48:52.895Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 159, "log records": 8200}
2024-01-31T23:48:53.131Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 121, "log records": 6094}
2024-01-31T23:48:56.399Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-01-31T23:48:56.812Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 13}
2024-01-31T23:48:59.423Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-01-31T23:48:59.624Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 3}
2024-01-31T23:49:01.229Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-01-31T23:49:01.631Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 16}
2024-01-31T23:49:05.049Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-01-31T23:49:05.451Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 9}

On the target machine, I watch the syslog server port 5140; no data shows up:

sudo tcpdump -i ens5 udp port 5140 -XAvvv

But if I manually send UDP data from the same machine the OTel Collector runs on, the message is printed on the target machine.

python -c "import socket ; s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); s.sendto('hello I am client',('23.21.46.218', 5140));"

The message:

[vitria@ip-10-0-2-189 ~]$ sudo tcpdump -i ens5 udp port 5140 -XAvvv
tcpdump: listening on ens5, link-type EN10MB (Ethernet), capture size 262144 bytes
23:53:36.417446 IP (tos 0x0, ttl 254, id 4298, offset 0, flags [DF], proto UDP (17), length 45)
    ec2-34-229-196-25.compute-1.amazonaws.com.27743 > ip-10-0-2-189.ec2.internal.5140: [udp sum ok] UDP, length 17
	0x0000:  4500 002d 10ca 4000 fe11 783a 22e5 c419  E..-..@...x:"...
	0x0010:  0a00 02bd 6c5f 1414 0019 36d2 6865 6c6c  ....l_....6.hell
	0x0020:  6f20 4920 616d 2063 6c69 656e 7400       o.I.am.client.

This means that if the syslog exporter sent the UDP packets out, the target should receive them. Since the target machine didn't receive any data, the syslog exporter apparently didn't send any. Does that make sense?

How can I print more logs from the syslog exporter? I don't know its internal logic. Any idea how to debug it?
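
One way to get more detail from the collector itself, assuming a standard otelcol-contrib build, is to raise the collector's own log level in the service telemetry section:

service:
  telemetry:
    logs:
      # default is info; debug makes component-level logging more verbose
      level: debug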

@oppokui
Author

oppokui commented Feb 1, 2024

@sumo-drosiek can you take a look at this issue? Is it a bug in the syslog exporter?

I am using Helm chart 0.77.0, which includes the Docker image otel/opentelemetry-collector-contrib:0.89.0.

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm install otel open-telemetry/opentelemetry-collector --version 0.77.0

I tried 0.93.0 and a dev build; none of them work.

@sumo-drosiek
Member

@oppokui AFAIR the syslog exporter requires some data preparation. The following attributes should be set so that the data can be properly serialized: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/syslogexporter#rfc5424

Could you use the transform processor to set up these attributes?
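
A minimal sketch of such a transform, assuming the k8sattributes processor has already placed the pod and node names into resource attributes, could look like this:

processors:
  transform:
    error_mode: ignore
    log_statements:
      - context: log
        statements:
          # message: use the log body as the syslog MSG field
          - set(attributes["message"], body)
          # appname / hostname: read from resource attributes written by k8sattributes
          - set(attributes["appname"], resource.attributes["k8s.pod.name"])
          - set(attributes["hostname"], resource.attributes["k8s.node.name"])

If those values actually live in resource attributes rather than log attributes, reading them through resource.attributes as above is likely what is needed.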

@sumo-drosiek
Member

Also, I would check the metrics (curl localhost:8888/metrics | grep exporter), as maybe there is some incompatibility in the syslog format between the exporter and the server 🤔

@oppokui
Author

oppokui commented Feb 1, 2024

@sumo-drosiek , great! That is exactly the problem I am hitting.

When I debug the same case from source code, I can see the empty syslog events in TCP mode:

<165>1 1970-01-01T00:00:00Z - - - - -\n
2024-02-01T20:10:36.765+0800	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 1}
2024-02-01T20:10:36.765+0800	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 1970-01-01T00:00:00Z - - - - -\n"}
2024-02-01T20:10:36.765+0800	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 1}
2024-02-01T20:10:36.765+0800	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 1970-01-01T00:00:00Z - - - - -\n"}

In this case, I read the /var/log/syslog file, which should be in standard syslog format, but the parser can't parse it.

  filelog/syslog:
    include:
    - /var/log/syslog
    include_file_name: false
    include_file_path: true
    operators:
      - type: syslog_parser
        protocol: rfc3164

The error log:

2024-02-01T20:28:41.153+0800	error	helper/transformer.go:98	Failed to process entry	{"kind": "receiver", "name": "filelog/syslog", "data_type": "logs", "operator_id": "syslog_parser", "operator_type": "syslog_parser", "error": "expecting a priority value within angle brackets [col 0]", "action": "send", "entry": {"observed_timestamp":"2024-02-01T20:28:41.153335673+08:00","timestamp":"0001-01-01T00:00:00Z","body":"Feb  1 20:28:41 oppokuit460p kernel: [120893.489447] audit: type=1400 audit(1706790520.996:131165): apparmor=\"DENIED\" operation=\"open\" class=\"file\" profile=\"snap.skype.skype\" name=\"/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/online\" pid=5004 comm=\"skypeforlinux\" requested_mask=\"r\" denied_mask=\"r\" fsuid=1000 ouid=0","attributes":{"log.file.path":"/var/log/syslog"},"severity":0,"scope_name":""}}
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*TransformerOperator).HandleEntryError
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.93.0/operator/helper/transformer.go:98
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ParseWith
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.93.0/operator/helper/parser.go:140
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ProcessWithCallback
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.93.0/operator/helper/parser.go:112
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/parser/syslog.(*Parser).Process

After I commented the parser out of the operators, the error log disappeared, but the syslog event body is empty.
This is not my primary case anyway, since my goal is to collect all k8s logs and send them out through syslog UDP. The main gap is still the attribute enrichment. I think I can enrich the following attributes based on k8s labels:

  • appname
  • hostname
  • message

@sumo-drosiek
Member

@oppokui Could you try the following parser?

    operators:
      - type: syslog_parser
        protocol: rfc5424

I believe /var/log/syslog is not RFC3164

Regarding message, the following configuration should help:

transform:
  error_mode: ignore
  log_statements:
    - context: log
      statements:
        - set(attributes['message'], body)

@oppokui
Author

oppokui commented Feb 1, 2024

@sumo-drosiek , I tried rfc5424; it didn't help.

For the attribute enrichment, I saw that filelog already has some operators configured, so I just added appname/hostname/message as follows:

        - from: attributes.log
          to: attributes.message
          type: copy
        - type: copy
          from: attributes.pod_name
          to: attributes["appname"]
        - type: copy
          from: resource["k8s.node.name"]
          to: attributes["hostname"]

But for "timeUnixNano", do you know how to enrich it from timestamp string format?

{
  "body": "",
  "timeUnixNano": 1065903255003000000,
  "attributes":
  {
    "appname": "su",
    "hostname": "mymachine.example.com",
    "message": "'su root' failed for lonvick on /dev/pts/8",
    "priority": 34,
  }
}

@sumo-drosiek
Member

You are probably looking for the time_parser operator, or a combination of regex_parser and time_parser.

You can then use the transform processor to move it from the log property to a log attribute. Is this what you are looking for?
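
A sketch of the time_parser operator inside the filelog receiver, assuming the container timestamp has already been captured into attributes.time as in the configs above:

receivers:
  filelog:
    include:
    - /var/log/pods/*/*/*.log
    operators:
      # parse the RFC3339Nano container timestamp into the log record's Timestamp field
      - type: time_parser
        parse_from: attributes.time
        layout_type: gotime
        layout: 2006-01-02T15:04:05.999999999Z07:00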

@oppokui
Author

oppokui commented Feb 1, 2024

I don't know "timeUnixNano" is required or not as filelog can identify the timestamp from the k8s log file. I assume syslog can identify the format too. No need to construct it manually, right?

I added two processors, k8sattributes and transform.

    processors:
      batch: {}
      k8sattributes:
        extract:
          labels:
          - from: pod
            key_regex: (.*)
            tag_name: $$1
          metadata:
          - k8s.namespace.name
          - k8s.deployment.name
          - k8s.statefulset.name
          - k8s.daemonset.name
          - k8s.cronjob.name
          - k8s.job.name
          - k8s.node.name
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.pod.start_time
        filter:
          node_from_env_var: K8S_NODE_NAME
        passthrough: false
        pod_association:
        - sources:
          - from: resource_attribute
            name: k8s.pod.ip
        - sources:
          - from: resource_attribute
            name: k8s.pod.uid
        - sources:
          - from: connection
      memory_limiter:
        check_interval: 5s
        limit_percentage: 80
        spike_limit_percentage: 26
      transform:
        error_mode: ignore
        log_statements:
          - context: log
            statements:
              - set(attributes["message"], body)
          - context: log
            statements:
              - set(attributes["appname"], attributes["k8s.pod.name"])
          - context: log
            statements:
              - set(attributes["hostname"], attributes["k8s.node.name"])

    service:
      extensions:
      - file_storage/syslog
      pipelines:
        logs:
          exporters:
          - syslog
          - debug
          processors:
          - k8sattributes
          - transform
          - memory_limiter
          - batch
          receivers:
          - filelog

The transform should enrich appname, hostname, and message, but I am not sure whether k8s.pod.name/k8s.node.name are actually in the log attributes after the k8sattributes processor. Do you know?

After this, I still didn't see a complete syslog event sent out.

@sumo-drosiek
Member

After this, I still didn't see a complete syslog event sent out.

What is missing? Most of the fields are optional, so the question is whether they are required by you, by the syslog server, or by the RFC.

If you want to send additional attributes (not specified explicitly in the RFC), you need to add them to the structured_data map. link to list of fields

Could you share an example input, its actual output, and the expected output? It would make it easier for me to understand the context. Ideally the outputs should come from both the syslog server and the debug (logging) exporter.

@oppokui
Author

oppokui commented Feb 1, 2024

I am not sure which part is missing. Originally I assumed the format was incomplete, but as you said, events should still be sent out even if attributes are missing.
I also wonder whether the syslog exporter is sending the UDP packets out at all.

The end-to-end data flow is as follows:

  1. The filelog receiver watches the k8s pod logs via /var/log/pods/*/*/*.log.
    Here is a snippet from one of the log files:
sudo cat /var/log/pods/example-hotrod_hotrod-customer-746549d779-69wld_4755e27a-7b2a-4069-b521-d167c02bca70/hotrod-customer/0.log

2024-02-01T07:18:12.459587145Z stderr F 2024-02-01T07:18:12.459Z	INFO	customer/database.go:76	Loading customer	{"service": "customer", "component": "mysql", "trace_id": "f83ef836122def2ac696d82e2c43c7e5", "span_id": "8df621220d58fed3", "customer_id": 731}
2024-02-01T07:18:12.459652037Z stderr F 2024-02-01T07:18:12.459Z	INFO	tracing/mutex.go:68	Acquired lock; 0 transactions waiting behind	{"service": "customer", "component": "mysql", "trace_id": "f83ef836122def2ac696d82e2c43c7e5", "span_id": "3eaccffddb4e4bb0", "waiters": "[]"}
2024-02-01T07:18:15.402164885Z stderr F 2024-02-01T07:18:15.401Z	INFO	customer/server.go:78	HTTP request received	{"service": "customer", "trace_id": "84d39b534076de263e2859210229ec6c", "span_id": "7d9a0716323ddc0b", "method": "GET", "url": "/customer?customer=123"}
2024-02-01T07:18:15.40217838Z stderr F 2024-02-01T07:18:15.402Z	INFO	customer/database.go:76	Loading customer	{"service": "customer", "component": "mysql", "trace_id": "84d39b534076de263e2859210229ec6c", "span_id": "7d9a0716323ddc0b", "customer_id": 123}
2024-02-01T07:18:15.402181614Z stderr F 2024-02-01T07:18:15.402Z	INFO	tracing/mutex.go:68	Acquired lock; 0 transactions waiting behind	{"service": "customer", "component": "mysql", "trace_id": "84d39b534076de263e2859210229ec6c", "span_id": "4e3a4a90463ca0d0", "waiters": "[]"}
2024-02-01T07:18:17.272851185Z stderr F 2024-02-01T07:18:17.272Z	INFO	customer/server.go:78	HTTP request received	{"service": "customer", "trace_id": "42ef7e686adf88f1b9d75d3cc5952dac", "span_id": "546b907dde726bba", "method": "GET", "url": "/customer?customer=567"}
2024-02-01T07:18:17.272873217Z stderr F 2024-02-01T07:18:17.272Z	INFO	customer/database.go:76	Loading customer	{"service": "customer", "component": "mysql", "trace_id": "42ef7e686adf88f1b9d75d3cc5952dac", "span_id": "546b907dde726bba", "customer_id": 567}
2024-02-01T07:18:17.272876857Z stderr F 2024-02-01T07:18:17.272Z	INFO	tracing/mutex.go:68	Acquired lock; 0 transactions waiting behind	{"service": "customer", "component": "mysql", "trace_id": "42ef7e686adf88f1b9d75d3cc5952dac", "span_id": "83dbc5ec9f6f8532", "waiters": "[]"}
2024-02-01T07:18:21.113233332Z stderr F 2024-02-01T07:18:21.113Z	INFO	customer/server.go:78	HTTP request received	{"service": "customer", "trace_id": "3c25a20cc361d44261816f6d5f3cbef6", "span_id": "ea1bb8c4759c9d12", "method": "GET", "url": "/customer?customer=731"}
2024-02-01T07:18:21.113248788Z stderr F 2024-02-01T07:18:21.113Z	INFO	customer/database.go:76	Loading customer	{"service": "customer", "component": "mysql", "trace_id": "3c25a20cc361d44261816f6d5f3cbef6", "span_id": "ea1bb8c4759c9d12", "customer_id": 731}
  2. The debug exporter prints the log entry counts:
2024-02-01T15:12:40.506Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/example-hotrod_hotrod-customer-746549d779-69wld_4755e27a-7b2a-4069-b521-d167c02bca70/hotrod-customer/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/example-hotrod_hotrod-driver-5cf45c4968-6t5xm_8985436d-bd09-4e6a-9d54-27f9b2da2306/hotrod-driver/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/example-hotrod_hotrod-frontend-57646dc7db-jfmpd_1319ff23-b3a4-45c7-b816-95592dd49267/hotrod-frontend/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/example-hotrod_hotrod-route-676f964757-8549f_0cb06278-2798-4249-a605-6a96d1b8485f/hotrod-route/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/kube-system_aws-node-vjljm_da4babe8-2738-49b2-84e7-60e95a2d547d/aws-eks-nodeagent/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/kube-system_aws-node-vjljm_da4babe8-2738-49b2-84e7-60e95a2d547d/aws-node/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/kube-system_aws-node-vjljm_da4babe8-2738-49b2-84e7-60e95a2d547d/aws-vpc-cni-init/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/kube-system_coredns-c48b6f766-ghnlj_7974f513-dfb7-480b-b1ef-4dc7d495d29b/coredns/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/kube-system_ebs-csi-node-2vsqs_ca32605a-286b-4828-8fe1-d5251f2cb193/ebs-plugin/0.log"}
2024-02-01T15:12:40.507Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/kube-system_ebs-csi-node-2vsqs_ca32605a-286b-4828-8fe1-d5251f2cb193/liveness-probe/0.log"}
2024-02-01T15:12:41.107Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 4}
2024-02-01T15:12:42.715Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-02-01T15:12:42.916Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 4}
2024-02-01T15:12:45.527Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-02-01T15:12:45.929Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 12}
2024-02-01T15:12:48.539Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-02-01T15:12:48.740Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 4}
2024-02-01T15:12:51.351Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-02-01T15:12:51.753Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 16}
  3. I looked at the syslog persistent-queue file, which should hold the final syslog events to be sent. I can see appname and message, but no hostname or timeUnixNano.
    The data is in the queue; whether it is actually sent out or not, I am not sure.
logtag
F2�
log�
�2024-02-01T15:17:09.105Z	INFO	frontend/server.go:107	HTTP request received	{"service": "frontend", "trace_id": "69e2fd0ac6038bcbb506b92d7b348f6f", "span_id": "9ae516b3e3abb303", "method": "GET", "url": "/dispatch?nonse=0.07647454267267473&customer=392"}2

log.iostream
stderr2-
appname"
 hotrod-frontend-57646dc7db-jfmpd2�
log.file.pathz
x/var/log/pods/example-hotrod_hotrod-frontend-57646dc7db-jfmpd_1319ff23-b3a4-45c7-b816-95592dd49267/hotrod-frontend/0.log2(
time 
2024-02-01T15:17:09.106241978Z2�
message�
�2024-02-01T15:17:09.106241978Z stderr F 2024-02-01T15:17:09.105Z	INFO	frontend/server.go:107	HTTP request received	{"service": "frontend", "trace_id": "69e2fd0ac6038bcbb506b92d7b348f6f", "span_id": "9ae516b3e3abb303", "method": "GET", "url": "/dispatch?nonse=0.07647454267267473&customer=392"}JRY�0!Ư�      ��J!Ư*�
�2024-02-01T15:17:09.106270954Z stderr F 2024-02-01T15:17:09.105Z	INFO	customer/client.go:47	Getting customer	{"service": "frontend", "component": "customer_client", "trace_id": "69e2fd0ac6038bcbb506b92d7b348f6f", "span_id": "9ae516b3e3abb303", "customer_id": 392}2

log.iostream
stderr2�
log.file.pathz
x/var/log/pods/example-hotrod_hotrod-frontend-57646dc7db-jfmpd_1319ff23-b3a4-45c7-b816-95592dd49267/hotrod-frontend/0.log2
logtag
F2�
log�
�2024-02-01T15:17:09.105Z	INFO	customer/client.go:47	Getting customer	{"service": "frontend", "component": "customer_client", "trace_id": "69e2fd0ac6038bcbb506b92d7b348f6f", "span_id": "9ae516b3e3abb303", "customer_id": 392}2-
appname"
 hotrod-frontend-57646dc7db-jfmpd2(
time 
2024-02-01T15:17:09.106270954Z2�
message�
�2024-02-01T15:17:09.106270954Z stderr F 2024-02-01T15:17:09.105Z	INFO	customer/client.go:47	Getting customer	{"service": "frontend", "component": "customer_client", "trace_id": "69e2fd0ac6038bcbb506b92d7b348f6f", "span_id": "9ae516b3e3abb303", "customer_id": 392}JRY�]1!Ư

  4. On the target machine, I watch the UDP port; no data is received.
[vitria@ip-10-0-2-189 ~]$ sudo tcpdump -i any udp port 5140 -X
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
  5. How can I prove that the syslog exporter is not sending the data?

When I send a UDP packet from the same machine the OTel Collector runs on, I can see the data on the target machine.

echo -n "hello world" | nc -u -w1 23.21.46.218 5140
[vitria@ip-10-0-2-189 ~]$ sudo tcpdump -i any udp port 5140 -X
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
15:31:36.954780 IP ec2-34-229-196-25.compute-1.amazonaws.com.22929 > ip-10-0-2-189.ec2.internal.5140: UDP, length 11
	0x0000:  4500 0027 4bd3 4000 fe11 3d37 22e5 c419  E..'K.@...=7"...
	0x0010:  0a00 02bd 5991 1414 0013 0c99 6865 6c6c  ....Y.......hell
	0x0020:  6f20 776f 726c 6400 0000 0000 0000       o.world.......

In my local environment, I can modify the .go code and add log statements to show whether connect() and sender.Write() are executed, but I have failed to deploy that build to the EKS environment, so I don't know whether the UDP connect/send works there.

opentelemetry-collector-contrib/exporter/syslogexporter/exporter.go

Any idea how to diagnose it?

@oppokui
Author

oppokui commented Feb 2, 2024

@sumo-drosiek , I checked out the collector-contrib code on my EC2 machine, modified exporter.go to print more logs, and then created the same pipeline as described above, from k8s logs to the syslog exporter.
I found that the code "sender.Write(payload.String())" was never executed; the code seems to hang in "connect()" on the TCP/UDP port.

My code change is:

diff --git a/exporter/syslogexporter/exporter.go b/exporter/syslogexporter/exporter.go
index 94b8508b2f..3b343ac90a 100644
--- a/exporter/syslogexporter/exporter.go
+++ b/exporter/syslogexporter/exporter.go
@@ -39,7 +39,7 @@ func initExporter(cfg *Config, createSettings exporter.CreateSettings) (*sysloge
                formatter: createFormatter(cfg.Protocol),
        }
 
-       s.logger.Info("Syslog Exporter configured",
+       s.logger.Info("Syslog Exporter configured by Kui",
                zap.String("endpoint", cfg.Endpoint),
                zap.String("protocol", cfg.Protocol),
                zap.Int("port", cfg.Port),
@@ -57,7 +57,7 @@ func newLogsExporter(
        if err != nil {
                return nil, fmt.Errorf("failed to initialize the logs exporter: %w", err)
        }
-
+       s.logger.Info("[Kui] newLogsExporter ")
        return exporterhelper.NewLogsExporter(
                ctx,
                params,
@@ -81,6 +81,9 @@ func (se *syslogexporter) pushLogsData(_ context.Context, logs plog.Logs) error
 }
 
 func (se *syslogexporter) exportBatch(logs plog.Logs) error {
+       se.logger.Info("[Kui] exportBatch",
+               zap.Int("logs", logs.ResourceLogs().Len()),
+       )
        var payload strings.Builder
        for i := 0; i < logs.ResourceLogs().Len(); i++ {
                resourceLogs := logs.ResourceLogs().At(i)
@@ -93,39 +96,62 @@ func (se *syslogexporter) exportBatch(logs plog.Logs) error {
                        }
                }
        }
-
+       se.logger.Info("[Kui] format data: ",
+               zap.String("data", payload.String()),
+       )
        if payload.Len() > 0 {
                sender, err := connect(se.logger, se.config, se.tlsConfig)
                if err != nil {
                        return consumererror.NewLogs(err, logs)
                }
                defer sender.close()
+               se.logger.Info("[Kui] connected syslog")
                err = sender.Write(payload.String())
                if err != nil {
                        return consumererror.NewLogs(err, logs)
                }
+       } else {
+               se.logger.Info("[Kui] No syslog data from plog.Logs",
+                       zap.Int("port", logs.ResourceLogs().Len()),
+               )
+
        }
        return nil
 }
 
 func (se *syslogexporter) exportNonBatch(logs plog.Logs) error {
+
+       se.logger.Info("[Kui] exportNonBatch",
+               zap.Int("logs", logs.ResourceLogs().Len()),
+       )
+
        sender, err := connect(se.logger, se.config, se.tlsConfig)
        if err != nil {
                return consumererror.NewLogs(err, logs)
        }
        defer sender.close()
 
+       se.logger.Info("[Kui] connected syslog")
        errs := []error{}
        droppedLogs := plog.NewLogs()
        for i := 0; i < logs.ResourceLogs().Len(); i++ {
                resourceLogs := logs.ResourceLogs().At(i)
                droppedResourceLogs := droppedLogs.ResourceLogs().AppendEmpty()
+               se.logger.Info("[Kui] logs.ResourceLogs()",
+                       zap.Int("ScopeLogs", resourceLogs.ScopeLogs().Len()),
+               )
                for j := 0; j < resourceLogs.ScopeLogs().Len(); j++ {
                        scopeLogs := resourceLogs.ScopeLogs().At(j)
                        droppedScopeLogs := droppedResourceLogs.ScopeLogs().AppendEmpty()
+                       se.logger.Info("[Kui] scopeLogs.LogRecords()",
+                               zap.Int("LogRecords", scopeLogs.LogRecords().Len()),
+                       )
                        for k := 0; k < scopeLogs.LogRecords().Len(); k++ {
                                logRecord := scopeLogs.LogRecords().At(k)
                                formatted := se.formatter.format(logRecord)
+                               se.logger.Info("[Kui] sender.write() ",
+                                       zap.String("data", formatted),
+                               )
                                err = sender.Write(formatted)
                                if err != nil {
                                        errs = append(errs, err)

The log only prints the "[Kui] format data" messages before the connect() line. There are lots of new lines in the k8s pods, but no further messages appear in the collector log; it seems to hang there.

[ec2-user@ip-10-3-21-159 opentelemetry-collector-contrib]$ sudo ./bin/otelcontribcol_linux_amd64 --config examples/config.yaml
2024-02-02T02:57:53.868Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/telemetry.go:59	Setting up own telemetry...
2024-02-02T02:57:53.868Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/telemetry.go:104	Serving metrics	{"address": "0.0.0.0:8888", "level": "Basic"}
2024-02-02T02:57:53.868Z	info	exporter@v0.93.1-0.20240130182548-89388addcc7f/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-02-02T02:57:53.869Z	info	syslogexporter@v0.93.0/exporter.go:42	Syslog Exporter configured by Kui	{"kind": "exporter", "data_type": "logs", "name": "syslog", "endpoint": "23.21.46.218", "protocol": "rfc5424", "port": 5140}
2024-02-02T02:57:53.869Z	info	syslogexporter@v0.93.0/exporter.go:60	[Kui] newLogsExporter 	{"kind": "exporter", "data_type": "logs", "name": "syslog"}
2024-02-02T02:57:53.870Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/service.go:140	Starting otelcontribcol...	{"Version": "0.93.0-dev", "NumCPU": 2}
2024-02-02T02:57:53.870Z	info	extensions/extensions.go:34	Starting extensions...
2024-02-02T02:57:53.870Z	info	extensions/extensions.go:37	Extension is starting...	{"kind": "extension", "name": "file_storage/syslog"}
2024-02-02T02:57:53.870Z	info	extensions/extensions.go:52	Extension started.	{"kind": "extension", "name": "file_storage/syslog"}
2024-02-02T02:57:53.903Z	info	internal/persistent_queue.go:420	Fetching items left for dispatch by consumers	{"kind": "exporter", "data_type": "logs", "name": "syslog", "numberOfItems": 200}
2024-02-02T02:57:54.089Z	info	internal/persistent_queue.go:462	Moved items for dispatching back to queue	{"kind": "exporter", "data_type": "logs", "name": "syslog", "numberOfItems": 200}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:39.594745976Z - - - - -\n<165>1 2024-02-02T01:39:39.594757717Z - - - - -\n<165>1 2024-02-02T01:39:39.594760936Z - - - - -\n<165>1 2024-02-02T01:39:39.594390233Z - - - - -\n<165>1 2024-02-02T01:39:39.594483407Z - - - - -\n"}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:39.884735658Z - - - - -\n<165>1 2024-02-02T01:39:39.884762857Z - - - - -\n<165>1 2024-02-02T01:39:39.885160112Z - - - - -\n<165>1 2024-02-02T01:39:39.917530049Z - - - - -\n<165>1 2024-02-02T01:39:39.928865482Z - - - - -\n<165>1 2024-02-02T01:39:39.941329178Z - - - - -\n<165>1 2024-02-02T01:39:39.971081587Z - - - - -\n<165>1 2024-02-02T01:39:39.971107397Z - - - - -\n<165>1 2024-02-02T01:39:39.981535415Z - - - - -\n<165>1 2024-02-02T01:39:39.993996113Z - - - - -\n<165>1 2024-02-02T01:39:40.006378368Z - - - - -\n<165>1 2024-02-02T01:39:40.019767495Z - - - - -\n<165>1 2024-02-02T01:39:40.051405226Z - - - - -\n<165>1 2024-02-02T01:39:40.051447865Z - - - - -\n"}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 1}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:40.282731992Z - - - - -\n<165>1 2024-02-02T01:39:40.282756812Z - - - - -\n"}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 3}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:40.058712901Z - - - - -\n<165>1 2024-02-02T01:39:40.065091913Z - - - - -\n<165>1 2024-02-02T01:39:40.076373942Z - - - - -\n<165>1 2024-02-02T01:39:40.086747882Z - - - - -\n<165>1 2024-02-02T01:39:40.086773872Z - - - - -\n<165>1 2024-02-02T01:39:40.088003164Z - - - - -\n<165>1 2024-02-02T01:39:40.088234871Z - - - - -\n<165>1 2024-02-02T01:39:40.0898575Z - - - - -\n<165>1 2024-02-02T01:39:40.127847443Z - - - - -\n<165>1 2024-02-02T01:39:40.1388336Z - - - - -\n<165>1 2024-02-02T01:39:40.143371315Z - - - - -\n<165>1 2024-02-02T01:39:40.172324328Z - - - - -\n<165>1 2024-02-02T01:39:40.1841858Z - - - - -\n<165>1 2024-02-02T01:39:40.184576216Z - - - - -\n<165>1 2024-02-02T01:39:40.224665752Z - - - - -\n<165>1 2024-02-02T01:39:40.087138009Z - - - - -\n<165>1 2024-02-02T01:39:40.087305707Z - - - - -\n<165>1 2024-02-02T01:39:40.087561959Z - - - - -\n<165>1 2024-02-02T01:39:40.08774728Z - - - - -\n<165>1 2024-02-02T01:39:40.127565769Z - - - - -\n<165>1 2024-02-02T01:39:40.138467181Z - - - - -\n<165>1 2024-02-02T01:39:40.143093854Z - - - - -\n<165>1 2024-02-02T01:39:40.17195708Z - - - - -\n<165>1 2024-02-02T01:39:40.18388312Z - - - - -\n<165>1 2024-02-02T01:39:40.18404603Z - - - - -\n<165>1 2024-02-02T01:39:40.224332801Z - - - - -\n"}
2024-02-02T02:57:54.090Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T02:57:54.091Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:43.43463509Z - - - - -\n<165>1 2024-02-02T01:39:43.434646872Z - - - - -\n<165>1 2024-02-02T01:39:43.434649577Z - - - - -\n<165>1 2024-02-02T01:39:43.434233175Z - - - - -\n<165>1 2024-02-02T01:39:43.434260471Z - - - - -\n"}
2024-02-02T02:57:54.091Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T02:57:54.091Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:43.677866379Z - - - - -\n<165>1 2024-02-02T01:39:43.702451234Z - - - - -\n<165>1 2024-02-02T01:39:43.732252691Z - - - - -\n<165>1 2024-02-02T01:39:43.732278786Z - - - - -\n<165>1 2024-02-02T01:39:43.740693024Z - - - - -\n<165>1 2024-02-02T01:39:43.749183486Z - - - - -\n<165>1 2024-02-02T01:39:43.762547535Z - - - - -\n<165>1 2024-02-02T01:39:43.77187666Z - - - - -\n<165>1 2024-02-02T01:39:43.802630248Z - - - - -\n<165>1 2024-02-02T01:39:43.802672212Z - - - - -\n<165>1 2024-02-02T01:39:43.817015591Z - - - - -\n<165>1 2024-02-02T01:39:43.821369299Z - - - - -\n<165>1 2024-02-02T01:39:43.83177928Z - - - - -\n<165>1 2024-02-02T01:39:43.844070815Z - - - - -\n<165>1 2024-02-02T01:39:43.677466581Z - - - - -\n<165>1 2024-02-02T01:39:43.677493644Z - - - - -\n"}
2024-02-02T02:57:54.091Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 3}
2024-02-02T02:57:54.091Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:43.877923149Z - - - - -\n<165>1 2024-02-02T01:39:43.877947178Z - - - - -\n<165>1 2024-02-02T01:39:43.890229409Z - - - - -\n<165>1 2024-02-02T01:39:43.9036618Z - - - - -\n<165>1 2024-02-02T01:39:43.903784108Z - - - - -\n<165>1 2024-02-02T01:39:43.905098267Z - - - - -\n<165>1 2024-02-02T01:39:43.905298765Z - - - - -\n<165>1 2024-02-02T01:39:43.907373843Z - - - - -\n<165>1 2024-02-02T01:39:43.925274952Z - - - - -\n<165>1 2024-02-02T01:39:43.947157893Z - - - - -\n<165>1 2024-02-02T01:39:43.970903784Z - - - - -\n<165>1 2024-02-02T01:39:43.985897954Z - - - - -\n<165>1 2024-02-02T01:39:44.0059326Z - - - - -\n<165>1 2024-02-02T01:39:44.021923225Z - - - - -\n<165>1 2024-02-02T01:39:44.028624141Z - - - - -\n<165>1 2024-02-02T01:39:43.90428737Z - - - - -\n<165>1 2024-02-02T01:39:43.904424298Z - - - - -\n<165>1 2024-02-02T01:39:43.904623626Z - - - - -\n<165>1 2024-02-02T01:39:43.904795208Z - - - - -\n<165>1 2024-02-02T01:39:43.924925985Z - - - - -\n<165>1 2024-02-02T01:39:43.946807058Z - - - - -\n<165>1 2024-02-02T01:39:43.970622511Z - - - - -\n<165>1 2024-02-02T01:39:43.985535339Z - - - - -\n<165>1 2024-02-02T01:39:44.00561916Z - - - - -\n<165>1 2024-02-02T01:39:44.021614265Z - - - - -\n<165>1 2024-02-02T01:39:44.028320725Z - - - - -\n"}
2024-02-02T02:57:54.091Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 1}
2024-02-02T02:57:54.091Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:44.066483909Z - - - - -\n<165>1 2024-02-02T01:39:44.066571586Z - - - - -\n"}
2024-02-02T02:57:54.093Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T02:57:54.093Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:47.21888327Z - - - - -\n<165>1 2024-02-02T01:39:47.218909707Z - - - - -\n<165>1 2024-02-02T01:39:47.218912792Z - - - - -\n<165>1 2024-02-02T01:39:47.218929533Z - - - - -\n<165>1 2024-02-02T01:39:47.218936206Z - - - - -\n"}
2024-02-02T02:57:54.093Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 1}
2024-02-02T02:57:54.094Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:39:51.705871429Z - - - - -\n<165>1 2024-02-02T01:39:51.706171477Z - - - - -\n"}
2024-02-02T02:57:54.116Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 3}
2024-02-02T02:57:54.116Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:41:00.951436788Z - - - - -\n<165>1 2024-02-02T01:41:00.9516846Z - - - - -\n<165>1 2024-02-02T01:41:00.953376549Z - - - - -\n<165>1 2024-02-02T01:41:00.994626123Z - - - - -\n<165>1 2024-02-02T01:41:01.00254625Z - - - - -\n<165>1 2024-02-02T01:41:01.006394297Z - - - - -\n<165>1 2024-02-02T01:41:01.027355509Z - - - - -\n<165>1 2024-02-02T01:41:01.036324245Z - - - - -\n<165>1 2024-02-02T01:41:01.045382355Z - - - - -\n<165>1 2024-02-02T01:41:00.853779004Z - - - - -\n<165>1 2024-02-02T01:41:00.853810173Z - - - - -\n<165>1 2024-02-02T01:41:00.860203799Z - - - - -\n<165>1 2024-02-02T01:41:00.878495024Z - - - - -\n<165>1 2024-02-02T01:41:00.887904219Z - - - - -\n<165>1 2024-02-02T01:41:00.899426548Z - - - - -\n<165>1 2024-02-02T01:41:00.932107686Z - - - - -\n<165>1 2024-02-02T01:41:00.932135004Z - - - - -\n<165>1 2024-02-02T01:41:00.940560039Z - - - - -\n<165>1 2024-02-02T01:41:00.949863636Z - - - - -\n<165>1 2024-02-02T01:41:00.949935049Z - - - - -\n<165>1 2024-02-02T01:41:00.950422086Z - - - - -\n<165>1 2024-02-02T01:41:00.950593075Z - - - - -\n<165>1 2024-02-02T01:41:00.950787183Z - - - - -\n<165>1 2024-02-02T01:41:00.951071969Z - - - - -\n<165>1 2024-02-02T01:41:00.99433788Z - - - - -\n<165>1 2024-02-02T01:41:01.002245718Z - - - - -\n<165>1 2024-02-02T01:41:01.005996989Z - - - - -\n<165>1 2024-02-02T01:41:01.027073307Z - - - - -\n<165>1 2024-02-02T01:41:01.035930058Z - - - - -\n<165>1 2024-02-02T01:41:01.044970372Z - - - - -\n"}

As I said, I can send TCP/UDP packets to the target machine with the nc command.

[ec2-user@ip-10-3-21-159 opentelemetry-collector-contrib]$ echo -n "hello world" | nc -u -w1 23.21.46.218 5140

On the target machine, I can see the UDP packet:

[vitria@ip-10-0-2-189 ~]$ sudo tcpdump -i any udp port 5140 -X
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
03:01:40.842737 IP ec2-34-229-196-25.compute-1.amazonaws.com.57565 > ip-10-0-2-189.ec2.internal.5140: UDP, length 11
	0x0000:  4500 0027 8ed0 4000 fe11 fa39 22e5 c419  E..'..@....9"...
	0x0010:  0a00 02bd e0dd 1414 0013 854c 6865 6c6c  ...........Lhell
	0x0020:  6f20 776f 726c 6400 0000 0000 0000       o.world.......
03:01:44.455238 IP ec2-34-229-196-25.compute-1.amazonaws.com.22803 > ip-10-0-2-189.ec2.internal.5140: UDP, length 11
	0x0000:  4500 0027 166c 4000 fe11 729e 22e5 c419  E..'.l@...r."...
	0x0010:  0a00 02bd 5913 1414 0013 0d17 6865 6c6c  ....Y.......hell
	0x0020:  6f20 776f 726c 6400 0000 0000 0000       o.world.......

Can you think of any reason the syslog exporter would hang in connect()?

sender, err := connect(se.logger, se.config, se.tlsConfig)

@oppokui (Author) commented Feb 2, 2024

@sumo-drosiek, I assume it is a bug. I didn't configure TLS, but the code still takes the tlsConfig path and hangs there.

My config file:

  syslog:
    endpoint: 23.21.46.218
    network: tcp
    port: 5140
    protocol: rfc5424
    retry_on_failure:
      enabled: false
      initial_interval: 1s
      max_elapsed_time: 10s
      max_interval: 40s
    sending_queue:
      enabled: true
      num_consumers: 200
      queue_size: 10000
      storage: file_storage/syslog
    timeout: 2s
    #tls: {}
    #  ca_file: certs/servercert.pem
    #  cert_file: certs/cert.pem
    #  key_file: certs/key.pem

My code change looks like this:

diff --git a/exporter/syslogexporter/sender.go b/exporter/syslogexporter/sender.go
index cd3d5d7009..b86d594c24 100644
--- a/exporter/syslogexporter/sender.go
+++ b/exporter/syslogexporter/sender.go
@@ -79,10 +79,13 @@ func (s *sender) dial() error {
        }
        var err error
        if s.tlsConfig != nil {
+                s.logger.Info("Start to connect with TLS", zap.String("addr", s.addr), zap.String("network", s.network))
                s.conn, err = tls.Dial("tcp", s.addr, s.tlsConfig)
        } else {
+                s.logger.Info("Start to connect ", zap.String("addr", s.addr), zap.String("network", s.network))
                s.conn, err = net.Dial(s.network, s.addr)
        }
+        s.logger.Info("connected ", zap.String("addr", s.addr))
        return err
 }

The log output is below (it goes down the tlsConfig path and hangs there):

[ec2-user@ip-10-3-21-159 opentelemetry-collector-contrib]$ ./bin/otelcontribcol_linux_amd64 --config examples/config.yaml
2024-02-02T05:37:49.077Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/telemetry.go:59	Setting up own telemetry...
2024-02-02T05:37:49.077Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/telemetry.go:104	Serving metrics	{"address": "0.0.0.0:8888", "level": "Basic"}
2024-02-02T05:37:49.077Z	info	syslogexporter@v0.93.0/exporter.go:42	Syslog Exporter configured by Kui	{"kind": "exporter", "data_type": "logs", "name": "syslog", "endpoint": "23.21.46.218", "protocol": "rfc5424", "port": 5140}
2024-02-02T05:37:49.077Z	info	syslogexporter@v0.93.0/exporter.go:60	[Kui] newLogsExporter 	{"kind": "exporter", "data_type": "logs", "name": "syslog"}
2024-02-02T05:37:49.077Z	info	exporter@v0.93.1-0.20240130182548-89388addcc7f/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-02-02T05:37:49.079Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/service.go:140	Starting otelcontribcol...	{"Version": "0.93.0-dev", "NumCPU": 2}
2024-02-02T05:37:49.079Z	info	extensions/extensions.go:34	Starting extensions...
2024-02-02T05:37:49.079Z	info	extensions/extensions.go:37	Extension is starting...	{"kind": "extension", "name": "file_storage/syslog"}
2024-02-02T05:37:49.079Z	info	extensions/extensions.go:52	Extension started.	{"kind": "extension", "name": "file_storage/syslog"}
2024-02-02T05:37:57.204Z	info	internal/persistent_queue.go:420	Fetching items left for dispatch by consumers	{"kind": "exporter", "data_type": "logs", "name": "syslog", "numberOfItems": 200}
2024-02-02T05:37:57.524Z	info	internal/persistent_queue.go:462	Moved items for dispatching back to queue	{"kind": "exporter", "data_type": "logs", "name": "syslog", "numberOfItems": 200}
2024-02-02T05:37:57.524Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.525Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:06.46814996Z - - - - -\n<165>1 2024-02-02T01:51:06.53164851Z - - - - -\n<165>1 2024-02-02T01:51:06.531805394Z - - - - -\n<165>1 2024-02-02T01:51:06.46858342Z - - - - -\n"}
2024-02-02T05:37:57.525Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.525Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.525Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:08.009605695Z - - - - -\n<165>1 2024-02-02T01:51:08.035067151Z - - - - -\n<165>1 2024-02-02T01:51:08.009032724Z - - - - -\n<165>1 2024-02-02T01:51:08.009059795Z - - - - -\n"}
2024-02-02T05:37:57.525Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.525Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.525Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:07.684154017Z - - - - -\n<165>1 2024-02-02T01:51:07.684211927Z - - - - -\n<165>1 2024-02-02T01:51:07.684215765Z - - - - -\n<165>1 2024-02-02T01:51:07.683663934Z - - - - -\n<165>1 2024-02-02T01:51:07.68369057Z - - - - -\n"}
2024-02-02T05:37:57.525Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.527Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 3}
2024-02-02T05:37:57.527Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:08.069841173Z - - - - -\n<165>1 2024-02-02T01:51:08.069867282Z - - - - -\n<165>1 2024-02-02T01:51:08.075238002Z - - - - -\n<165>1 2024-02-02T01:51:08.081807712Z - - - - -\n<165>1 2024-02-02T01:51:08.094203637Z - - - - -\n<165>1 2024-02-02T01:51:08.099645176Z - - - - -\n<165>1 2024-02-02T01:51:08.1323686Z - - - - -\n<165>1 2024-02-02T01:51:08.132417501Z - - - - -\n<165>1 2024-02-02T01:51:08.145840035Z - - - - -\n<165>1 2024-02-02T01:51:08.157356001Z - - - - -\n<165>1 2024-02-02T01:51:08.171029732Z - - - - -\n<165>1 2024-02-02T01:51:08.184469148Z - - - - -\n<165>1 2024-02-02T01:51:08.213213327Z - - - - -\n<165>1 2024-02-02T01:51:08.21343848Z - - - - -\n<165>1 2024-02-02T01:51:08.228813297Z - - - - -\n<165>1 2024-02-02T01:51:08.238312204Z - - - - -\n<165>1 2024-02-02T01:51:08.238559472Z - - - - -\n<165>1 2024-02-02T01:51:08.239954232Z - - - - -\n<165>1 2024-02-02T01:51:08.240254783Z - - - - -\n<165>1 2024-02-02T01:51:08.241361111Z - - - - -\n<165>1 2024-02-02T01:51:08.239046814Z - - - - -\n<165>1 2024-02-02T01:51:08.239148983Z - - - - -\n<165>1 2024-02-02T01:51:08.239280786Z - - - - -\n<165>1 2024-02-02T01:51:08.239416693Z - - - - -\n"}
2024-02-02T05:37:57.527Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.530Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.530Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:08.287617197Z - - - - -\n<165>1 2024-02-02T01:51:08.287731481Z - - - - -\n<165>1 2024-02-02T01:51:08.29463088Z - - - - -\n<165>1 2024-02-02T01:51:08.332150597Z - - - - -\n<165>1 2024-02-02T01:51:08.337967888Z - - - - -\n<165>1 2024-02-02T01:51:08.342598452Z - - - - -\n<165>1 2024-02-02T01:51:08.382632112Z - - - - -\n<165>1 2024-02-02T01:51:08.287079398Z - - - - -\n<165>1 2024-02-02T01:51:08.287338432Z - - - - -\n<165>1 2024-02-02T01:51:08.294253575Z - - - - -\n<165>1 2024-02-02T01:51:08.331693835Z - - - - -\n<165>1 2024-02-02T01:51:08.337676634Z - - - - -\n<165>1 2024-02-02T01:51:08.342392219Z - - - - -\n<165>1 2024-02-02T01:51:08.382668762Z - - - - -\n<165>1 2024-02-02T01:51:08.420395833Z - - - - -\n<165>1 2024-02-02T01:51:08.420421313Z - - - - -\n"}
2024-02-02T05:37:57.530Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.530Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.530Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:11.571129075Z - - - - -\n<165>1 2024-02-02T01:51:11.571144878Z - - - - -\n<165>1 2024-02-02T01:51:11.571154345Z - - - - -\n<165>1 2024-02-02T01:51:11.570811567Z - - - - -\n<165>1 2024-02-02T01:51:11.570838728Z - - - - -\n"}
2024-02-02T05:37:57.531Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.532Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.532Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:11.91698882Z - - - - -\n<165>1 2024-02-02T01:51:11.942593511Z - - - - -\n<165>1 2024-02-02T01:51:11.952936695Z - - - - -\n<165>1 2024-02-02T01:51:11.960412799Z - - - - -\n<165>1 2024-02-02T01:51:11.992155586Z - - - - -\n<165>1 2024-02-02T01:51:11.992182956Z - - - - -\n<165>1 2024-02-02T01:51:12.003816096Z - - - - -\n<165>1 2024-02-02T01:51:12.015191215Z - - - - -\n<165>1 2024-02-02T01:51:12.026598906Z - - - - -\n<165>1 2024-02-02T01:51:12.039144847Z - - - - -\n<165>1 2024-02-02T01:51:11.916332807Z - - - - -\n<165>1 2024-02-02T01:51:11.916382993Z - - - - -\n"}
2024-02-02T05:37:57.532Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.535Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 3}
2024-02-02T05:37:57.535Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:12.070775255Z - - - - -\n<165>1 2024-02-02T01:51:12.070801991Z - - - - -\n<165>1 2024-02-02T01:51:12.082087691Z - - - - -\n<165>1 2024-02-02T01:51:12.090506984Z - - - - -\n<165>1 2024-02-02T01:51:12.100825933Z - - - - -\n<165>1 2024-02-02T01:51:12.10712558Z - - - - -\n<165>1 2024-02-02T01:51:12.107139411Z - - - - -\n<165>1 2024-02-02T01:51:12.108133831Z - - - - -\n<165>1 2024-02-02T01:51:12.108153637Z - - - - -\n<165>1 2024-02-02T01:51:12.110183241Z - - - - -\n<165>1 2024-02-02T01:51:12.169339324Z - - - - -\n<165>1 2024-02-02T01:51:12.175938404Z - - - - -\n<165>1 2024-02-02T01:51:12.182953591Z - - - - -\n<165>1 2024-02-02T01:51:12.23307588Z - - - - -\n<165>1 2024-02-02T01:51:12.234753127Z - - - - -\n<165>1 2024-02-02T01:51:12.237545394Z - - - - -\n<165>1 2024-02-02T01:51:12.107652532Z - - - - -\n<165>1 2024-02-02T01:51:12.10767749Z - - - - -\n<165>1 2024-02-02T01:51:12.10778156Z - - - - -\n<165>1 2024-02-02T01:51:12.108159538Z - - - - -\n<165>1 2024-02-02T01:51:12.16898954Z - - - - -\n<165>1 2024-02-02T01:51:12.175658427Z - - - - -\n<165>1 2024-02-02T01:51:12.182596611Z - - - - -\n<165>1 2024-02-02T01:51:12.232632547Z - - - - -\n<165>1 2024-02-02T01:51:12.234393052Z - - - - -\n<165>1 2024-02-02T01:51:12.237207355Z - - - - -\n"}
2024-02-02T05:37:57.535Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.535Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.535Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:12.266222498Z - - - - -\n<165>1 2024-02-02T01:51:12.31873711Z - - - - -\n<165>1 2024-02-02T01:51:12.318770963Z - - - - -\n<165>1 2024-02-02T01:51:12.266697123Z - - - - -\n"}
2024-02-02T05:37:57.535Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.536Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.536Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:51:13.471489426Z - - - - -\n<165>1 2024-02-02T01:51:13.471503496Z - - - - -\n<165>1 2024-02-02T01:51:13.471703453Z - - - - -\n<165>1 2024-02-02T01:51:13.471090375Z - - - - -\n<165>1 2024-02-02T01:51:13.471115722Z - - - - -\n"}
2024-02-02T05:37:57.536Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:37:57.728Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:37:57.728Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:52:21.232761965Z - - - - -\n<165>1 2024-02-02T01:52:21.23277923Z - - - - -\n<165>1 2024-02-02T01:52:21.232792276Z - - - - -\n<165>1 2024-02-02T01:52:21.232722676Z - - - - -\n<165>1 2024-02-02T01:52:21.232752134Z - - - - -\n"}
2024-02-02T05:37:57.728Z	info	syslogexporter@v0.93.0/sender.go:80	Start to connect with TLS	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}

After I comment out the tlsConfig path and retry, the TCP/UDP packets are sent out, and I can now see the (empty) syslog events on the target machine.
The code change:

diff --git a/exporter/syslogexporter/sender.go b/exporter/syslogexporter/sender.go
index cd3d5d7009..b502099811 100644
--- a/exporter/syslogexporter/sender.go
+++ b/exporter/syslogexporter/sender.go
@@ -78,11 +78,14 @@ func (s *sender) dial() error {
                s.conn = nil
        }
        var err error
-       if s.tlsConfig != nil {
-               s.conn, err = tls.Dial("tcp", s.addr, s.tlsConfig)
-       } else {
+       //if s.tlsConfig != nil {
+        //        s.logger.Info("Start to connect with TLS", zap.String("addr", s.addr), zap.String("network", s.network))
+       //      s.conn, err = tls.Dial("tcp", s.addr, s.tlsConfig)
+       //} else {
+                s.logger.Info("Start to connect ", zap.String("addr", s.addr), zap.String("network", s.network))
                s.conn, err = net.Dial(s.network, s.addr)
-       }
+       //}
+        s.logger.Info("connected ", zap.String("addr", s.addr))
        return err
 }

Then the log looks like this:

[ec2-user@ip-10-3-21-159 opentelemetry-collector-contrib]$ ./bin/otelcontribcol_linux_amd64 --config examples/config.yaml
2024-02-02T05:48:55.971Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/telemetry.go:59	Setting up own telemetry...
2024-02-02T05:48:55.971Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/telemetry.go:104	Serving metrics	{"address": "0.0.0.0:8888", "level": "Basic"}
2024-02-02T05:48:55.971Z	info	syslogexporter@v0.93.0/exporter.go:42	Syslog Exporter configured by Kui	{"kind": "exporter", "data_type": "logs", "name": "syslog", "endpoint": "23.21.46.218", "protocol": "rfc5424", "port": 5140}
2024-02-02T05:48:55.971Z	info	syslogexporter@v0.93.0/exporter.go:60	[Kui] newLogsExporter 	{"kind": "exporter", "data_type": "logs", "name": "syslog"}
2024-02-02T05:48:55.971Z	info	exporter@v0.93.1-0.20240130182548-89388addcc7f/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-02-02T05:48:55.973Z	info	service@v0.93.1-0.20240130182548-89388addcc7f/service.go:140	Starting otelcontribcol...	{"Version": "0.93.0-dev", "NumCPU": 2}
2024-02-02T05:48:55.973Z	info	extensions/extensions.go:34	Starting extensions...
2024-02-02T05:48:55.973Z	info	extensions/extensions.go:37	Extension is starting...	{"kind": "extension", "name": "file_storage/syslog"}
2024-02-02T05:48:55.973Z	info	extensions/extensions.go:52	Extension started.	{"kind": "extension", "name": "file_storage/syslog"}
2024-02-02T05:49:03.732Z	info	internal/persistent_queue.go:420	Fetching items left for dispatch by consumers	{"kind": "exporter", "data_type": "logs", "name": "syslog", "numberOfItems": 200}
2024-02-02T05:49:04.143Z	info	internal/persistent_queue.go:462	Moved items for dispatching back to queue	{"kind": "exporter", "data_type": "logs", "name": "syslog", "numberOfItems": 200}
2024-02-02T05:49:04.144Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:49:04.144Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:53:23.908232952Z - - - - -\n<165>1 2024-02-02T01:53:23.908416588Z - - - - -\n<165>1 2024-02-02T01:53:23.908423958Z - - - - -\n<165>1 2024-02-02T01:53:23.907774118Z - - - - -\n<165>1 2024-02-02T01:53:23.907800117Z - - - - -\n"}
2024-02-02T05:49:04.144Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:49:04.144Z	info	syslogexporter@v0.93.0/sender.go:85	Start to connect 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:49:04.144Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:53:24.251967231Z - - - - -\n<165>1 2024-02-02T01:53:24.27620227Z - - - - -\n<165>1 2024-02-02T01:53:24.307903428Z - - - - -\n<165>1 2024-02-02T01:53:24.307941612Z - - - - -\n<165>1 2024-02-02T01:53:24.318364507Z - - - - -\n<165>1 2024-02-02T01:53:24.327664587Z - - - - -\n<165>1 2024-02-02T01:53:24.334023168Z - - - - -\n<165>1 2024-02-02T01:53:24.343369943Z - - - - -\n<165>1 2024-02-02T01:53:24.373976291Z - - - - -\n<165>1 2024-02-02T01:53:24.373995166Z - - - - -\n<165>1 2024-02-02T01:53:24.382439547Z - - - - -\n<165>1 2024-02-02T01:53:24.394806685Z - - - - -\n<165>1 2024-02-02T01:53:24.403229022Z - - - - -\n<165>1 2024-02-02T01:53:24.411417617Z - - - - -\n<165>1 2024-02-02T01:53:24.437077021Z - - - - -\n<165>1 2024-02-02T01:53:24.437100164Z - - - - -\n<165>1 2024-02-02T01:53:24.450837893Z - - - - -\n<165>1 2024-02-02T01:53:24.251905235Z - - - - -\n<165>1 2024-02-02T01:53:24.251931887Z - - - - -\n"}
2024-02-02T05:49:04.144Z	info	syslogexporter@v0.93.0/sender.go:85	Start to connect 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:49:04.145Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 3}
2024-02-02T05:49:04.145Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:53:24.462072095Z - - - - -\n<165>1 2024-02-02T01:53:24.462087113Z - - - - -\n<165>1 2024-02-02T01:53:24.462671678Z - - - - -\n<165>1 2024-02-02T01:53:24.462683935Z - - - - -\n<165>1 2024-02-02T01:53:24.462686726Z - - - - -\n<165>1 2024-02-02T01:53:24.462689123Z - - - - -\n<165>1 2024-02-02T01:53:24.496139951Z - - - - -\n<165>1 2024-02-02T01:53:24.518987252Z - - - - -\n<165>1 2024-02-02T01:53:24.520863198Z - - - - -\n<165>1 2024-02-02T01:53:24.549790272Z - - - - -\n<165>1 2024-02-02T01:53:24.565649188Z - - - - -\n<165>1 2024-02-02T01:53:24.566373487Z - - - - -\n<165>1 2024-02-02T01:53:24.607321646Z - - - - -\n<165>1 2024-02-02T01:53:24.463406041Z - - - - -\n<165>1 2024-02-02T01:53:24.463418076Z - - - - -\n<165>1 2024-02-02T01:53:24.464486324Z - - - - -\n<165>1 2024-02-02T01:53:24.49635823Z - - - - -\n<165>1 2024-02-02T01:53:24.519404702Z - - - - -\n<165>1 2024-02-02T01:53:24.521062062Z - - - - -\n<165>1 2024-02-02T01:53:24.550129267Z - - - - -\n<165>1 2024-02-02T01:53:24.565994873Z - - - - -\n<165>1 2024-02-02T01:53:24.566673489Z - - - - -\n<165>1 2024-02-02T01:53:24.607785591Z - - - - -\n"}
2024-02-02T05:49:04.145Z	info	syslogexporter@v0.93.0/sender.go:85	Start to connect 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:49:04.145Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 1}
2024-02-02T05:49:04.145Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:53:24.654603892Z - - - - -\n<165>1 2024-02-02T01:53:24.654627823Z - - - - -\n"}
2024-02-02T05:49:04.145Z	info	syslogexporter@v0.93.0/sender.go:85	Start to connect 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:49:04.146Z	info	syslogexporter@v0.93.0/sender.go:88	connected 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140"}
2024-02-02T05:49:04.146Z	info	syslogexporter@v0.93.0/exporter.go:108	[Kui] connected syslog	{"kind": "exporter", "data_type": "logs", "name": "syslog"}
2024-02-02T05:49:04.146Z	info	syslogexporter@v0.93.0/sender.go:88	connected 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140"}
2024-02-02T05:49:04.146Z	info	syslogexporter@v0.93.0/exporter.go:108	[Kui] connected syslog	{"kind": "exporter", "data_type": "logs", "name": "syslog"}
2024-02-02T05:49:04.146Z	info	syslogexporter@v0.93.0/exporter.go:84	[Kui] exportBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}
2024-02-02T05:49:04.146Z	info	syslogexporter@v0.93.0/exporter.go:99	[Kui] format data: 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "data": "<165>1 2024-02-02T01:53:27.806713077Z - - - - -\n<165>1 2024-02-02T01:53:27.80672943Z - - - - -\n<165>1 2024-02-02T01:53:27.806828874Z - - - - -\n<165>1 2024-02-02T01:53:27.806212567Z - - - - -\n<165>1 2024-02-02T01:53:27.806239626Z - - - - -\n"}
2024-02-02T05:49:04.146Z	info	syslogexporter@v0.93.0/sender.go:85	Start to connect 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140", "network": "tcp"}
2024-02-02T05:49:04.146Z	info	syslogexporter@v0.93.0/sender.go:88	connected 	{"kind": "exporter", "data_type": "logs", "name": "syslog", "addr": "23.21.46.218:5140"}

@sumo-drosiek, can you help fix this in the next release?

Now the remaining issue for me is the syslog event mapping; I still get an empty body.
I will switch to that investigation next. If you have more ideas, please share them. Much appreciated.

<165>1 2024-02-02T01:53:24.251967231Z - - - - -
<165>1 2024-02-02T01:53:24.27620227Z - - - - -
<165>1 2024-02-02T01:53:24.307903428Z - - - - -
<165>1 2024-02-02T01:53:24.307941612Z - - - - -
<165>1 2024-02-02T01:53:24.318364507Z - - - - -
<165>1 2024-02-02T01:53:24.327664587Z - - - - -
<165>1 2024-02-02T01:53:24.334023168Z - - - - -
<165>1 2024-02-02T01:53:24.343369943Z - - - - -
<165>1 2024-02-02T01:53:24.373976291Z - - - - -
<165>1 2024-02-02T01:53:24.373995166Z - - - - -
<165>1 2024-02-02T01:53:24.382439547Z - - - - -
<165>1 2024-02-02T01:53:24.394806685Z - - - - -
<165>1 2024-02-02T01:53:24.403229022Z - - - - -
<165>1 2024-02-02T01:53:24.411417617Z - - - - -
<165>1 2024-02-02T01:53:24.437077021Z - - - - -
<165>1 2024-02-02T01:53:24.437100164Z - - - - -
<165>1 2024-02-02T01:53:24.450837893Z - - - - -
<165>1 2024-02-02T01:53:24.251905235Z - - - - -
<165>1 2024-02-02T01:53:24.251931887Z - - - - -
<165>1 2024-02-02T01:53:24.462072095Z - - - - -
<165>1 2024-02-02T01:53:24.462087113Z - - - - -
<165>1 2024-02-02T01:53:24.462671678Z - - - - -
<165>1 2024-02-02T01:53:24.462683935Z - - - - -
<165>1 2024-02-02T01:53:24.462686726Z - - - - -

@sumo-drosiek (Member)

I don't know "timeUnixNano" is required or not as filelog can identify the timestamp from the k8s log file. I assume syslog can identify the format too. No need to construct it manually, right?

Yes, the log timestamp is used to build the syslog message.

After I comment out the tlsConfig path and retry, the TCP/UDP packets are sent out, and I can now see the (empty) syslog events on the target machine.

Oh, I forgot about that. In order to disable TLS, the following configuration has to be used:

syslog:
  tls:
    insecure: true
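
For completeness, here is how that setting slots into the syslog exporter configuration used earlier in this thread (a minimal sketch; endpoint, port, and protocol are just the values from the config above, and the retry/queue settings are omitted):

exporters:
  syslog:
    endpoint: 23.21.46.218
    network: tcp
    port: 5140
    protocol: rfc5424
    tls:
      insecure: true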

Now the remaining issue for me is the syslog event mapping; I still get an empty body.
I will switch to that investigation next. If you have more ideas, please share them. Much appreciated.

I recommend changing the verbosity to detailed for the debug/logging exporter, as it helps to see the structure of the data being passed to the syslog exporter.
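
For reference, a minimal sketch of the relevant fragment (the debug exporter's verbosity setting):

exporters:
  debug:
    verbosity: detailed

With this set, the debug exporter prints every resource, scope, log record, and attribute, as in the detailed dump in the next comment.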

@oppokui (Author) commented Feb 2, 2024

Woooooo, it definitely is a bug. Whether I don't set tls at all or set tls to an empty map, it should mean insecure. Hanging there without any warning is definitely wrong; it blocked me for two weeks.

Now I can see syslog-formatted messages on the target machine, with timestamp, appname, and message. Just no hostname.

<165>1 2024-02-02T07:13:43.283282596Z - hotrod-frontend-57646dc7db-jfmpd - - - 2024-02-02T07:13:43.283282596Z stderr F 2024-02-02T07:13:43.283Z	INFO	route/client.go:47	Finding route	{"service": "frontend", "component": "route_client", "trace_id": "3b6ef4e0fee52e580922a7c4b4c8e0b1", "span_id": "d167d252d87e249d", "pickup": "799,772", "dropoff": "115,277"}
<165>1 2024-02-02T07:13:43.319231377Z - hotrod-frontend-57646dc7db-jfmpd - - - 2024-02-02T07:13:43.319231377Z stderr F 2024-02-02T07:13:43.319Z	INFO	route/client.go:47	Finding route	{"service": "frontend", "component": "route_client", "trace_id": "3b6ef4e0fee52e580922a7c4b4c8e0b1", "span_id": "d167d252d87e249d", "pickup": "959,640", "dropoff": "115,277"}
<165>1 2024-02-02T07:13:43.327171187Z - hotrod-frontend-57646dc7db-jfmpd - - - 2024-02-02T07:13:43.327171187Z stderr F 2024-02-02T07:13:43.327Z	INFO	route/client.go:47	Finding route	{"service": "frontend", "component": "route_client", "trace_id": "3b6ef4e0fee52e580922a7c4b4c8e0b1", "span_id": "d167d252d87e249d", "pickup": "557,17", "dropoff": "115,277"}

@oppokui (Author) commented Feb 2, 2024

@sumo-drosiek I enabled detailed verbosity in the debug exporter, and now I can see the structure of the events passed to the syslog exporter.
I see that the resource attributes contain what I want ("k8s.node.name"), and that one ResourceLog contains multiple LogRecords, each representing one log entry.
How can I populate "k8s.node.name" from the ResourceLog down to each LogRecord? Is that possible?
I see the "transform" processor has different contexts (log/resource); how do I access resource attributes in the log context?

2024-02-02T07:25:32.782Z	info	fileconsumer/file.go:268	Started watching file	{"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/kube-system_ebs-csi-node-2vsqs_ca32605a-286b-4828-8fe1-d5251f2cb193/liveness-probe/0.log"}
2024-02-02T07:25:33.281Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 2, "log records": 5}
2024-02-02T07:25:33.281Z	info	ResourceLog #0
Resource SchemaURL: 
Resource attributes:
     -> k8s.pod.name: Str(hotrod-customer-746549d779-69wld)
     -> k8s.container.restart_count: Str(0)
     -> k8s.pod.uid: Str(4755e27a-7b2a-4069-b521-d167c02bca70)
     -> k8s.container.name: Str(hotrod-customer)
     -> k8s.namespace.name: Str(example-hotrod)
     -> k8s.pod.start_time: Str(2024-01-31T19:19:20Z)
     -> k8s.node.name: Str(ip-10-3-21-159.ec2.internal)
     -> app: Str(hotrod-customer)
     -> pod-template-hash: Str(746549d779)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-02-02 07:25:33.182729519 +0000 UTC
Timestamp: 2024-02-02 07:25:33.025711832 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(2024-02-02T07:25:33.025711832Z stderr F 2024-02-02T07:25:33.025Z	INFO	customer/server.go:78	HTTP request received	{"service": "customer", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "87063917b784b1b3", "method": "GET", "url": "/customer?customer=731"})
Attributes:
     -> log.file.path: Str(/var/log/pods/example-hotrod_hotrod-customer-746549d779-69wld_4755e27a-7b2a-4069-b521-d167c02bca70/hotrod-customer/0.log)
     -> time: Str(2024-02-02T07:25:33.025711832Z)
     -> log.iostream: Str(stderr)
     -> logtag: Str(F)
     -> log: Str(2024-02-02T07:25:33.025Z	INFO	customer/server.go:78	HTTP request received	{"service": "customer", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "87063917b784b1b3", "method": "GET", "url": "/customer?customer=731"})
     -> appname: Str(hotrod-customer-746549d779-69wld)
     -> message: Str(2024-02-02T07:25:33.025711832Z stderr F 2024-02-02T07:25:33.025Z	INFO	customer/server.go:78	HTTP request received	{"service": "customer", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "87063917b784b1b3", "method": "GET", "url": "/customer?customer=731"})
Trace ID: 
Span ID: 
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-02-02 07:25:33.182854577 +0000 UTC
Timestamp: 2024-02-02 07:25:33.025725188 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(2024-02-02T07:25:33.025725188Z stderr F 2024-02-02T07:25:33.025Z	INFO	customer/database.go:76	Loading customer	{"service": "customer", "component": "mysql", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "87063917b784b1b3", "customer_id": 731})
Attributes:
     -> log.file.path: Str(/var/log/pods/example-hotrod_hotrod-customer-746549d779-69wld_4755e27a-7b2a-4069-b521-d167c02bca70/hotrod-customer/0.log)
     -> logtag: Str(F)
     -> log: Str(2024-02-02T07:25:33.025Z	INFO	customer/database.go:76	Loading customer	{"service": "customer", "component": "mysql", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "87063917b784b1b3", "customer_id": 731})
     -> time: Str(2024-02-02T07:25:33.025725188Z)
     -> appname: Str(hotrod-customer-746549d779-69wld)
     -> log.iostream: Str(stderr)
     -> message: Str(2024-02-02T07:25:33.025725188Z stderr F 2024-02-02T07:25:33.025Z	INFO	customer/database.go:76	Loading customer	{"service": "customer", "component": "mysql", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "87063917b784b1b3", "customer_id": 731})
Trace ID: 
Span ID: 
Flags: 0
LogRecord #2
ObservedTimestamp: 2024-02-02 07:25:33.182875474 +0000 UTC
Timestamp: 2024-02-02 07:25:33.025727928 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(2024-02-02T07:25:33.025727928Z stderr F 2024-02-02T07:25:33.025Z	INFO	tracing/mutex.go:68	Acquired lock; 0 transactions waiting behind	{"service": "customer", "component": "mysql", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "1f899df0f5e0926f", "waiters": "[]"})
Attributes:
     -> time: Str(2024-02-02T07:25:33.025727928Z)
     -> log.file.path: Str(/var/log/pods/example-hotrod_hotrod-customer-746549d779-69wld_4755e27a-7b2a-4069-b521-d167c02bca70/hotrod-customer/0.log)
     -> log.iostream: Str(stderr)
     -> log: Str(2024-02-02T07:25:33.025Z	INFO	tracing/mutex.go:68	Acquired lock; 0 transactions waiting behind	{"service": "customer", "component": "mysql", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "1f899df0f5e0926f", "waiters": "[]"})
     -> appname: Str(hotrod-customer-746549d779-69wld)
     -> logtag: Str(F)
     -> message: Str(2024-02-02T07:25:33.025727928Z stderr F 2024-02-02T07:25:33.025Z	INFO	tracing/mutex.go:68	Acquired lock; 0 transactions waiting behind	{"service": "customer", "component": "mysql", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "1f899df0f5e0926f", "waiters": "[]"})
Trace ID: 
Span ID: 
Flags: 0
ResourceLog #1
Resource SchemaURL: 
Resource attributes:
     -> k8s.container.name: Str(hotrod-frontend)
     -> k8s.namespace.name: Str(example-hotrod)
     -> k8s.pod.name: Str(hotrod-frontend-57646dc7db-jfmpd)
     -> k8s.container.restart_count: Str(0)
     -> k8s.pod.uid: Str(1319ff23-b3a4-45c7-b816-95592dd49267)
     -> k8s.pod.start_time: Str(2024-01-31T16:12:06Z)
     -> k8s.node.name: Str(ip-10-3-21-159.ec2.internal)
     -> app: Str(hotrod-frontend)
     -> pod-template-hash: Str(57646dc7db)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-02-02 07:25:33.182783632 +0000 UTC
Timestamp: 2024-02-02 07:25:33.025305188 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(2024-02-02T07:25:33.025305188Z stderr F 2024-02-02T07:25:33.025Z	INFO	frontend/server.go:107	HTTP request received	{"service": "frontend", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "775b38a9a7e00e45", "method": "GET", "url": "/dispatch?nonse=0.07647454267267473&customer=731"})
Attributes:
     -> appname: Str(hotrod-frontend-57646dc7db-jfmpd)
     -> log.file.path: Str(/var/log/pods/example-hotrod_hotrod-frontend-57646dc7db-jfmpd_1319ff23-b3a4-45c7-b816-95592dd49267/hotrod-frontend/0.log)
     -> log.iostream: Str(stderr)
     -> log: Str(2024-02-02T07:25:33.025Z	INFO	frontend/server.go:107	HTTP request received	{"service": "frontend", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "775b38a9a7e00e45", "method": "GET", "url": "/dispatch?nonse=0.07647454267267473&customer=731"})
     -> time: Str(2024-02-02T07:25:33.025305188Z)
     -> logtag: Str(F)
     -> message: Str(2024-02-02T07:25:33.025305188Z stderr F 2024-02-02T07:25:33.025Z	INFO	frontend/server.go:107	HTTP request received	{"service": "frontend", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "775b38a9a7e00e45", "method": "GET", "url": "/dispatch?nonse=0.07647454267267473&customer=731"})
Trace ID: 
Span ID: 
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-02-02 07:25:33.182866291 +0000 UTC
Timestamp: 2024-02-02 07:25:33.025334435 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(2024-02-02T07:25:33.025334435Z stderr F 2024-02-02T07:25:33.025Z	INFO	customer/client.go:47	Getting customer	{"service": "frontend", "component": "customer_client", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "775b38a9a7e00e45", "customer_id": 731})
Attributes:
     -> time: Str(2024-02-02T07:25:33.025334435Z)
     -> log.iostream: Str(stderr)
     -> logtag: Str(F)
     -> log.file.path: Str(/var/log/pods/example-hotrod_hotrod-frontend-57646dc7db-jfmpd_1319ff23-b3a4-45c7-b816-95592dd49267/hotrod-frontend/0.log)
     -> log: Str(2024-02-02T07:25:33.025Z	INFO	customer/client.go:47	Getting customer	{"service": "frontend", "component": "customer_client", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "775b38a9a7e00e45", "customer_id": 731})
     -> appname: Str(hotrod-frontend-57646dc7db-jfmpd)
     -> message: Str(2024-02-02T07:25:33.025334435Z stderr F 2024-02-02T07:25:33.025Z	INFO	customer/client.go:47	Getting customer	{"service": "frontend", "component": "customer_client", "trace_id": "55e10299bb4f5175b0d76210f0f786d6", "span_id": "775b38a9a7e00e45", "customer_id": 731})
Trace ID: 
Span ID: 
Flags: 0
	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-02-02T07:25:33.281Z	info	syslogexporter@v0.93.0/exporter.go:124	[Kui] exportNonBatch	{"kind": "exporter", "data_type": "logs", "name": "syslog", "logs": 2}

@sumo-drosiek (Member)

How can I populate "k8s.node.name" from the ResourceLog down to each LogRecord? Is that possible?

Yes, please use the following:

transform:
  error_mode: ignore
  log_statements:
    - context: log
      statements:
        - set(attributes["k8s.node.name"], resource.attributes["k8s.node.name"])
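
As a rough sketch of how this processor could be wired into the logs pipeline discussed in this thread (assuming the filelog receiver and syslog exporter names from the configs above; not an authoritative layout):

processors:
  transform:
    error_mode: ignore
    log_statements:
      - context: log
        statements:
          - set(attributes["k8s.node.name"], resource.attributes["k8s.node.name"])

service:
  pipelines:
    logs:
      receivers:
        - filelog
      processors:
        - transform
      exporters:
        - syslog

If the goal is specifically to fill the empty HOSTNAME field of the RFC 5424 output, the same pattern should presumably work with the hostname attribute, which the exporter appears to map to the HOSTNAME field the same way appname and message are mapped in the output above, e.g. set(attributes["hostname"], resource.attributes["k8s.node.name"]).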

@oppokui (Author) commented Feb 2, 2024

Well done!

For structured data, how can I enrich multiple sub-fields into attributes.structured_data?

@sumo-drosiek (Member)

Well done!

For structured data, how can I enrich multiple sub-fields into attributes.structured_data?

The following should do the job:

  transform:
    log_statements:
      - context: log
        statements:
        - set(attributes["structured_data"]["field"], attributes["attribute_1"])
        - set(attributes["structured_data"]["another_field"], attributes["attribute_2"])

@oppokui (Author) commented Feb 2, 2024

Thanks very much, @sumo-drosiek! You saved my life!

@oppokui closed this as completed on Feb 2, 2024