logsExporter does not work #18

Open

debajyoti-truefoundry opened this issue Dec 30, 2024 · 3 comments

Comments

@debajyoti-truefoundry

Error log:

unexpected error error_class=Elasticsearch::UnsupportedProductError error="The client noticed that the server is not Elasticsearch and we do not support this unknown product."

Chart detail:

repo_url: https://siglens.github.io/charts
chart: siglens
version: 0.1.3

Siglens image: siglens/siglens:1.0.7
Log exporter image: fluent/fluentd-kubernetes-daemonset:v1.14-debian-elasticsearch7-1

@nkunal commented Dec 30, 2024

Could you try using the ....debian-elasticsearch7-9 image? In SigLens's server.yaml there is a config option (https://github.com/siglens/siglens/blob/develop/server.yaml#L19) where you can specify which ES version SigLens should emulate.
The other option is to set server.yaml's esVersion to the same value that the fluentd daemonset expects.
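
For illustration, a minimal sketch of that setting in server.yaml (the key is the one the linked line points at; the version string here is an assumption chosen to match an elasticsearch7 client, not necessarily the project default):

  # server.yaml (sketch; verify the key and default against the linked file)
  # esVersion tells SigLens which Elasticsearch version to report to clients,
  # so that version-checking clients such as fluent-plugin-elasticsearch accept it.
  esVersion: "7.9.3"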

@debajyoti-truefoundry
Author

Hello. I will continue to use the Promtail integration, thanks. However, it would be better to update the Helm chart's default values so that it works out of the box.

@Macbeth98
Contributor

Hi @debajyoti-truefoundry,

I deployed the Helm chart using the default values.yaml, and it worked as expected. I did not observe the error message you mentioned in your report.

  1. I first deployed the Helm chart on my local Docker Desktop with Kubernetes enabled. The logs-exporter pod ran without any issues, and I was able to view the logs in the SigLens UI under the kubernetes-logs index.

  2. Next, I deployed the same Helm chart on a GKE cluster. Initially, there was a parsing issue with Fluentd, but after resolving that (PR: fix: fluentd logs parsing format issue #22; see the sketch below), the logs-exporter also worked fine on GKE. I was able to view the logs in the SigLens UI without any further issues.
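
For context, here is a hedged sketch of the kind of <parse> change involved. The actual fix is in PR #22; this block only assumes the GKE issue was containerd's CRI log format, which the already-installed fluent-plugin-parser-cri gem can handle:

  <parse>
    @type multi_format
    <pattern>
      format json
    </pattern>
    <pattern>
      # CRI runtimes (e.g. containerd on GKE) write "<time> <stream> <P|F> <log>"
      format cri
    </pattern>
    <pattern>
      format none
    </pattern>
  </parse>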

For your reference, I have attached the logs I observed from the logs-exporter pod. Could you please share more details about your setup, including where and how you are running the Helm chart?

Additionally, could you try a quick test by updating the image pull policy to Always and redeploying the Helm chart? While I don't believe this is the root cause of the issue, it's the only potential factor I can think of at the moment. The fluentd image specified in the Helm chart is supposed to be compatible with our Elasticsearch interface.
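
If it helps, a sketch of that override in the chart's values (the logsExporter.image.pullPolicy key path is an assumption for illustration; check the chart's values.yaml for the actual structure):

  # values.yaml override (sketch; key path assumed)
  logsExporter:
    image:
      pullPolicy: Always   # always pull a fresh image instead of reusing a cached one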

Thanks!

kubectl logs siglens-logs-exporter-77gd7
2025-01-03 02:08:35 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-concat' version '2.5.0'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-dedot_filter' version '1.0.0'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-detect-exceptions' version '0.0.14'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.1.5'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.6.2'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-json-in-json-2' version '1.0.2'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.11.1'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-parser-cri' version '0.1.1'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.3'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2025-01-03 02:08:36 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.5'
2025-01-03 02:08:36 +0000 [info]: gem 'fluentd' version '1.14.6'
2025-01-03 02:08:36 +0000 [warn]: define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label @FLUENT_LOG> instead
2025-01-03 02:08:36 +0000 [info]: using configuration file: <ROOT>
  <match fluent.**>
    @type null
  </match>
  <source>
    @type tail
    path "/var/log/containers/*.log"
    pos_file "/var/log/containers.log.pos"
    tag "kubernetes.*"
    read_from_head true
    <parse>
      @type "multi_format"
      unmatched_lines
      <pattern>
        format json
      </pattern>
      <pattern>
        format none
      </pattern>
    </parse>
  </source>
  <filter kubernetes.**>
    @type kubernetes_metadata
  </filter>
  <filter kubernetes.**>
    @type record_transformer
    enable_ruby true
    remove_keys $.kubernetes.namespace_id,$.kubernetes.pod_id,$.kubernetes.master_url,$.kubernetes.container_image_id,$.docker.container_id,$.docker.namespace_labels
  </filter>
  <match kubernetes.**>
    @type elasticsearch_dynamic
    @log_level "info"
    host "siglens"
    port 8081
    path "/elastic"
    scheme http
    ssl_verify false
    ssl_version TLSv1_2
    reload_connections true
    index_name "kubernetes-logs"
    <buffer>
      flush_mode interval
      flush_interval 30s
      flush_thread_count 2
      retry_max_interval 30
      retry_forever true
    </buffer>
  </match>
</ROOT>
2025-01-03 02:08:36 +0000 [info]: starting fluentd-1.14.6 pid=7 ruby="2.7.5"
2025-01-03 02:08:37 +0000 [info]: spawn command to main:  cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/fluentd/vendor/bundle/ruby/2.7.0/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--gemfile", "/fluentd/Gemfile", "-r", "/fluentd/vendor/bundle/ruby/2.7.0/gems/fluent-plugin-elasticsearch-5.1.5/lib/fluent/plugin/elasticsearch_simple_sniffer.rb", "--under-supervisor"]
2025-01-03 02:08:38 +0000 [info]: adding match pattern="fluent.**" type="null"
2025-01-03 02:08:38 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2025-01-03 02:08:38 +0000 [info]: adding filter pattern="kubernetes.**" type="record_transformer"
2025-01-03 02:08:38 +0000 [info]: adding match pattern="kubernetes.**" type="elasticsearch_dynamic"
2025-01-03 02:08:40 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. no address for siglens (Resolv::ResolvError)
2025-01-03 02:08:40 +0000 [warn]: #0 Remaining retry: 14. Retry to communicate after 2 second(s).
2025-01-03 02:08:44 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. no address for siglens (Resolv::ResolvError)
2025-01-03 02:08:44 +0000 [warn]: #0 Remaining retry: 13. Retry to communicate after 4 second(s).
2025-01-03 02:08:52 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. no address for siglens (Resolv::ResolvError)
2025-01-03 02:08:52 +0000 [warn]: #0 Remaining retry: 12. Retry to communicate after 8 second(s).
2025-01-03 02:08:52 +0000 [info]: adding source type="tail"
2025-01-03 02:08:52 +0000 [warn]: #0 define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label @FLUENT_LOG> instead
2025-01-03 02:08:52 +0000 [info]: #0 starting fluentd worker pid=18 ppid=7 worker=0
2025-01-03 02:08:52 +0000 [info]: #0 following tail of /var/log/containers/fluentbit-gke-8245k_kube-system_fluentbit-gke-init-5254b7b48bb1939df0ab16285726b3707c9043eaa0537a88aff55e6f47f3a497.log
2025-01-03 02:08:52 +0000 [info]: #0 following tail of /var/log/containers/event-exporter-gke-547c84d95b-trhbm_kube-system_event-exporter-5d78b7878b5ab396aa73ca674c582ceaca609a831e3294f442699ff0d5048d45.log
2025-01-03 02:08:53 +0000 [info]: #0 fluentd worker is now running worker=0
2025-01-03 02:09:16 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 9, pod_cache_api_updates: 9, id_cache_miss: 9, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:09:25 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::KubernetesMetadataFilter, Fluent::Plugin::RecordTransformerFilter] uses `#filter_stream` method.
2025-01-03 02:09:46 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:10:23 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:10:53 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:11:25 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:11:57 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:12:29 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:12:59 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:13:30 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:14:02 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:14:33 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
2025-01-03 02:15:04 +0000 [info]: #0 stats - namespace_cache_size: 9, pod_cache_size: 24, namespace_cache_api_updates: 10, pod_cache_api_updates: 10, id_cache_miss: 10, pod_cache_watch_updates: 3, pod_cache_host_updates: 24, namespace_cache_host_updates: 9
