Describe the bug:
fluentd by default only opens the metrics port on `0.0.0.0:24231`, i.e. it is not exposed over IPv6.
Upstream issue: fluent/fluentd#3001 (it seems complicated though, because a library needs to be updated/switched)
A long-open PR for the fluentd Helm chart repository (fluent/helm-charts#447) proposes simply adding a second Prometheus source. It seems a valid approach to me, given fluentd is running in a container and nobody will have blocked the IPv6 port as feared in the upstream issue; it was originally denied there, though (fluent/fluent-plugin-prometheus#151).
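For illustration, a minimal sketch of that workaround in plain fluentd configuration, assuming the default metrics port 24231 and path (the exact block rendered by the chart PR may differ):

```
# Existing IPv4 metrics endpoint (fluentd/plugin defaults)
<source>
  @type prometheus
  bind "0.0.0.0"
  port 24231
  metrics_path /metrics
</source>

# Second source bound to IPv6, so IPv6-only clusters can scrape as well
<source>
  @type prometheus
  bind "::"
  port 24231
  metrics_path /metrics
</source>
```

With both sources active, scraping `http://[::1]:24231/metrics` from inside the container should succeed in addition to the IPv4 endpoint.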
logging-operator currently does not expose configuration for the required monitoring settings. Possible solutions I can think of:

- Add a `bind` configuration option to the metrics type; this is fairly straightforward and would allow "choosing" between IPv4 and IPv6 (see the sketch after this list), but the option would also need to be handled by the other implementations (syslog tailer, fluentbit). I consider this feasible and we would provide a PR, but it needs discussion first.
Add a "generic" configuration injection option (unless there is one we missed), such that one can cleanly inject a matching configuration block. Not really a nice solution for this issue, but probably generally good to have for special use cases and workarounds.
- ...?
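For the first option, a hypothetical rendering of what a configurable `bind` could produce; the field and its plumbing are assumptions and do not exist in logging-operator today, and per the upstream limitation a single source still listens on one address family only, so this effectively lets the user pick IPv4 or IPv6:

```
# Hypothetical result if the metrics type exposed a bind setting
# and the user set it to "::" for an IPv6-only cluster
<source>
  @type prometheus
  bind "::"          # currently effectively 0.0.0.0 (the plugin default)
  port 24231
  metrics_path /metrics
</source>
```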
Expected behaviour:
Metrics are either available on both IPv4 and IPv6 by default (preferred, as it "just works"), or the behaviour can be toggled so that IPv6-only clusters can scrape fluentd metrics.
Steps to reproduce the bug:
Steps to reproduce the bug should be clear and easily reproducible to help people gain an understanding of the problem.
Additional context:
Add any other context about the problem here.
Environment details:
/kind bug
Currently, IPv6-only clusters cannot scrape metrics from fluentd, as fluentd's `bind` default is `0.0.0.0`; any IPv6-only cluster will fail scrapes.
fluentd currently does not support binding both IPv6 and IPv4 in a single source. Adding a second, IPv6 source makes fluentd listen on both IPv4 and IPv6, allowing scrapes from either address family.
Fixes kube-logging#1835
Signed-off-by: Jens Erat <jens.erat@telekom.de>