fluentd metrics cannot be scraped on IPv6 only clusters #1835

Closed
JensErat opened this issue Oct 30, 2024 · 0 comments · Fixed by #1836
Labels
bug Something isn't working

Comments

@JensErat
Contributor

JensErat commented Oct 30, 2024

Describe the bug:

fluentd by default opens the metrics port only on 0.0.0.0:24231, i.e. it is not exposed for IPv6.

Upstream issue: fluent/fluentd#3001 (it seems complicated though, because a library needs to be updated/switched)

A long-open PR for the fluentd helm repository (fluent/helm-charts#447) proposes simply adding a second Prometheus source. That seems a valid approach to me: fluentd runs in a container, so nobody will have blocked the IPv6 port as feared in the upstream issue, even though the idea was originally rejected (fluent/fluent-plugin-prometheus#151).

logging-operator currently does not expose configuration for the required monitoring settings:

<source>
    @type prometheus
    port {{ .Monitor.Port }}
{{- if .Monitor.Path }}
    metrics_path {{ .Monitor.Path }}
{{- end }}
</source>
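
For illustration, the second-source approach from the helm chart PR would roughly amount to rendering an additional source next to the existing one. A minimal sketch, assuming fluent-plugin-prometheus's bind parameter and "::" as the IPv6 wildcard (the exact bind value used by the PR is not confirmed here):

<source>
    @type prometheus
    port {{ .Monitor.Port }}
{{- if .Monitor.Path }}
    metrics_path {{ .Monitor.Path }}
{{- end }}
</source>

# assumed additional source bound to the IPv6 wildcard, so IPv6-only clusters can scrape
<source>
    @type prometheus
    port {{ .Monitor.Port }}
    bind "::"
{{- if .Monitor.Path }}
    metrics_path {{ .Monitor.Path }}
{{- end }}
</source>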

Possible solutions I can think of:

  • generally listen on IPv6 as well, like the helm chart PR above does (expose fluentd metrics also on IPv6 #1836), as sketched above
  • add a bind configuration option to the metrics type (see the sketch after this list); fairly straightforward and would allow "choosing" between IPv4/IPv6, but the option also needs to be handled by the other implementations (syslog tailer, fluentbit) -- I consider this feasible and we would provide a PR, but it needs discussion first
  • add a "generic" configuration injection option (unless there is one we missed), such that one can cleanly inject a matching configuration block. Not really a nice solution for this issue, but probably generally good to have for special use cases and workarounds.
  • ...?
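
For the bind option idea, a purely hypothetical template change (the .Monitor.Bind field does not exist today and is named here only for illustration):

<source>
    @type prometheus
    port {{ .Monitor.Port }}
{{- if .Monitor.Bind }}
    bind {{ .Monitor.Bind }}
{{- end }}
{{- if .Monitor.Path }}
    metrics_path {{ .Monitor.Path }}
{{- end }}
</source>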

Expected behaviour:

Metrics are either available on both IPv4 and IPv6 by default (preferred, as it "just works"), or the behavior can be toggled, so that IPv6-only clusters can scrape fluentd metrics.
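
As a rough acceptance check once either approach is in place (pod IPs and tooling are placeholders; this assumes curl is available in a debug pod inside the cluster):

# scrape the fluentd pod directly over both address families
curl -sf http://10.42.0.15:24231/metrics | head -n 3            # IPv4 pod IP (placeholder)
curl -sf 'http://[fd00:10:244::15]:24231/metrics' | head -n 3   # IPv6 pod IP (placeholder)

Both requests should return Prometheus text-format metrics.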

Steps to reproduce the bug:
Deploy the logging-operator fluentd on an IPv6-only cluster and try to scrape the metrics endpoint on port 24231; the scrape fails, since fluentd binds the port only on 0.0.0.0.


Environment details:

  • Kubernetes version (e.g. v1.15.2):
  • Cloud-provider/provisioner (e.g. AKS, GKE, EKS, PKE etc):
  • logging-operator version (e.g. 2.1.1): 4.10.0/from master
  • Install method (e.g. helm or static manifests): helm
  • Logs from the misbehaving component (and any other relevant logs):
  • Resource definition (possibly in YAML format) that caused the issue, without sensitive data:

/kind bug

@JensErat added the bug (Something isn't working) label on Oct 30, 2024
JensErat added a commit to JensErat/logging-operator that referenced this issue Oct 30, 2024
Currently, IPv6 only clusters cannot scrape metrics from fluentd, as fluentd's `bind` default is `0.0.0.0`. Any IPv6 only cluster will fail scrapes.

fluentd currently does not support binding both IPv6 and IPv4 in a single source. Adding a second IPv6 source makes fluentd listen on both IPv4 and IPv6, allowing scrapes universally.

Fixes kube-logging#1835
@pepov closed this as completed in 386eaf0 on Nov 4, 2024
JensErat added a commit to JensErat/logging-operator that referenced this issue Dec 20, 2024
Currently, IPv6 only clusters cannot scrape metrics from fluentd, as fluentd's `bind` default is `0.0.0.0`. Any IPv6 only cluster will fail scrapes.

fluentd currently does not support binding both IPv6 and IPv4 in a single source. Adding a second IPv6 source makes fluentd listen on both IPv4 and IPv6, allowing scrapes universally.

Fixes kube-logging#1835

Signed-off-by: Jens Erat <jens.erat@telekom.de>