Update kafka exporter dependency #6778

Merged 26 commits into main from update-kafka-exporter-dependency on Apr 12, 2024

Commits
2356677
update agent to local exporter
wildum Mar 25, 2024
a85ef75
remove prune interval argument
wildum Mar 26, 2024
090568a
add kafka exporter integration test
wildum Mar 27, 2024
e849aa0
replace davidmparrott kafka exporter by wildum kafka exporter
wildum Mar 27, 2024
990cdb0
update doc
wildum Mar 27, 2024
4f4eab1
update unit test
wildum Mar 27, 2024
3a0ba27
merge main
wildum Mar 27, 2024
584a942
update converter
wildum Mar 27, 2024
9acfb88
fix integration test
wildum Mar 27, 2024
f746a14
integration ci test
wildum Mar 27, 2024
6a6aee1
integration ci test
wildum Mar 27, 2024
8f1ef29
integration ci test
wildum Mar 27, 2024
9dee72d
add longer sleep
wildum Mar 27, 2024
2d3aab8
add notes about metrics rename
wildum Apr 3, 2024
4436f87
fix copy paste mistake with metric name
wildum Apr 4, 2024
7f9cd3f
avoid breaking changes by re-introducing pruneIntervalSeconds as a no-op
wildum Apr 5, 2024
aea63be
change kafka_exporter ref from wildum to grafana
wildum Apr 9, 2024
c0d1958
merge main
wildum Apr 9, 2024
84ceaf8
Merge branch 'main' into update-kafka-exporter-dependency
wildum Apr 9, 2024
2ad5b9c
Update docs/sources/flow/reference/components/prometheus.exporter.kaf…
wildum Apr 10, 2024
0ec7677
fix unit tests
wildum Apr 10, 2024
87c52c5
Merge branch 'update-kafka-exporter-dependency' of github.com:grafana…
wildum Apr 10, 2024
deb7785
put the pruneIntervalSeconds arg in converter
wildum Apr 10, 2024
789519c
Update CHANGELOG.md
wildum Apr 11, 2024
d535338
merge main
wildum Apr 11, 2024
00e7914
fix doc
wildum Apr 11, 2024
13 changes: 13 additions & 0 deletions CHANGELOG.md
@@ -34,6 +34,16 @@ Main (unreleased)

### Enhancements

- Update `prometheus.exporter.kafka` with the following new functionality (@wildum):
  * GSSAPI configuration
  * enable/disable PA_FX_FAST
  * set a TLS server name
  * show the offset/lag for all consumer groups or only the connected ones
  * set the minimum number of topics to monitor
  * enable/disable auto-creation of requested topics if they don't already exist
  * regexes to exclude topics/groups
  * new metric `kafka_broker_info`

- Add support for importing folders as a single module to `import.file`. (@wildum)

- Add support for importing directories as a single module to `import.git`. (@wildum)
@@ -46,6 +56,9 @@ Main (unreleased)

- Add conversion from static to flow mode for `loki.source.windowsevent` via `legacy_bookmark_path`. (@mattdurham)

- In `prometheus.exporter.kafka`, the interpolation table used to compute estimated lag metrics is now pruned
on `metadata_refresh_interval` instead of `prune_interval_seconds`. (@wildum)

- Add the ability to convert a static mode positions file to a `loki.source.file`-compatible format via the `legacy_positions_file` argument. (@mattdurham)

- Added support for `otelcol` configuration conversion in `grafana-agent convert` and `grafana-agent run` commands. (@rfratto, @erikbaranowski, @tpaschalis, @hainenber)
@@ -12,7 +12,7 @@ title: prometheus.exporter.kafka
# prometheus.exporter.kafka

The `prometheus.exporter.kafka` component embeds
[kafka_exporter](https://github.com/davidmparrott/kafka_exporter) for collecting metrics from a Kafka server.
[kafka_exporter](https://github.com/grafana/kafka_exporter) for collecting metrics from a Kafka server.

## Usage

@@ -27,30 +27,42 @@ prometheus.exporter.kafka "LABEL" {
You can use the following arguments to configure the exporter's behavior.
Omitted fields take their default values.

| Name | Type | Description | Default | Required |
| --------------------------- | --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- |
| `kafka_uris` | `array(string)` | Address array (host:port) of Kafka server. | | yes |
| `instance` | `string` | The`instance`label for metrics, default is the hostname:port of the first kafka_uris. You must manually provide the instance value if there is more than one string in kafka_uris. | | no |
| `use_sasl` | `bool` | Connect using SASL/PLAIN. | | no |
| `use_sasl_handshake` | `bool` | Only set this to false if using a non-Kafka SASL proxy. | `false` | no |
| `sasl_username` | `string` | SASL user name. | | no |
| `sasl_password` | `string` | SASL user password. | | no |
| `sasl_mechanism` | `string` | The SASL SCRAM SHA algorithm sha256 or sha512 as mechanism. | | no |
| `use_tls` | `bool` | Connect using TLS. | | no |
| `ca_file` | `string` | The optional certificate authority file for TLS client authentication. | | no |
| `cert_file` | `string` | The optional certificate file for TLS client authentication. | | no |
| `key_file` | `string` | The optional key file for TLS client authentication. | | no |
| `insecure_skip_verify` | `bool` | If set to true, the server's certificate will not be checked for validity. This makes your HTTPS connections insecure. | | no |
| `kafka_version` | `string` | Kafka broker version. | `2.0.0` | no |
| `use_zookeeper_lag` | `bool` | If set to true, use a group from zookeeper. | | no |
| `zookeeper_uris` | `array(string)` | Address array (hosts) of zookeeper server. | | no |
| `kafka_cluster_name` | `string` | Kafka cluster name. | | no |
| `metadata_refresh_interval` | `duration` | Metadata refresh interval. | `1m` | no |
| `allow_concurrency` | `bool` | If set to true, all scrapes trigger Kafka operations. Otherwise, they will share results. WARNING: Disable this on large clusters. | `true` | no |
| `max_offsets` | `int` | The maximum number of offsets to store in the interpolation table for a partition. | `1000` | no |
| `prune_interval_seconds` | `int` | How frequently should the interpolation table be pruned, in seconds. | `30` | no |
| `topics_filter_regex` | `string` | Regex filter for topics to be monitored. | `.*` | no |
| `groups_filter_regex` | `string` | Regex filter for consumer groups to be monitored. | `.*` | no |
| Name                          | Type            | Description | Default | Required |
| ----------------------------- | --------------- | ----------- | ------- | -------- |
| `kafka_uris`                  | `array(string)` | Address array (host:port) of the Kafka server. | | yes |
| `instance`                    | `string`        | The `instance` label for metrics. Defaults to the host:port of the first element of `kafka_uris`. You must provide the instance value manually if `kafka_uris` contains more than one string. | | no |
| `use_sasl`                    | `bool`          | Connect using SASL/PLAIN. | | no |
| `use_sasl_handshake`          | `bool`          | Only set this to false if using a non-Kafka SASL proxy. | `true` | no |
| `sasl_username`               | `string`        | SASL user name. | | no |
| `sasl_password`               | `string`        | SASL user password. | | no |
| `sasl_mechanism`              | `string`        | The SASL SCRAM SHA algorithm, either `sha256` or `sha512`. | | no |
| `sasl_disable_pafx_fast`      | `bool`          | Configure the Kerberos client to not use PA_FX_FAST. | | no |
| `use_tls`                     | `bool`          | Connect using TLS. | | no |
| `tls_server_name`             | `string`        | Used to verify the hostname on the returned certificates unless `insecure_skip_verify` is set. If you don't provide the Kafka server name, the hostname is taken from the URL. | | no |
| `ca_file`                     | `string`        | The optional certificate authority file for TLS client authentication. | | no |
| `cert_file`                   | `string`        | The optional certificate file for TLS client authentication. | | no |
| `key_file`                    | `string`        | The optional key file for TLS client authentication. | | no |
| `insecure_skip_verify`        | `bool`          | If set to true, the server's certificate is not checked for validity. This makes your HTTPS connections insecure. | | no |
| `kafka_version`               | `string`        | Kafka broker version. | `2.0.0` | no |
| `use_zookeeper_lag`           | `bool`          | If set to true, use a consumer group from ZooKeeper. | | no |
| `zookeeper_uris`              | `array(string)` | Address array (hosts) of the ZooKeeper server. | | no |
| `kafka_cluster_name`          | `string`        | Kafka cluster name. | | no |
| `metadata_refresh_interval`   | `duration`      | Metadata refresh interval. | `1m` | no |
| `gssapi_service_name`         | `string`        | Service name when using Kerberos authorization. | | no |
| `gssapi_kerberos_config_path` | `string`        | Kerberos config path. | | no |
| `gssapi_realm`                | `string`        | Kerberos realm. | | no |
| `gssapi_key_tab_path`         | `string`        | Kerberos keytab file path. | | no |
| `gssapi_kerberos_auth_type`   | `string`        | Kerberos auth type. Either `keytabAuth` or `userAuth`. | | no |
| `offset_show_all`             | `bool`          | If set to true, show the offset/lag for all consumer groups; otherwise, show only connected consumer groups. | `true` | no |
| `topic_workers`               | `int`           | Minimum number of topics to monitor. | `100` | no |
| `allow_concurrency`           | `bool`          | If set to true, all scrapes trigger Kafka operations. Otherwise, they share results. WARNING: Disable this on large clusters. | `true` | no |
| `allow_auto_topic_creation`   | `bool`          | If set to true, the broker may auto-create requested topics that don't already exist. | | no |
| `max_offsets`                 | `int`           | The maximum number of offsets to store in the interpolation table for a partition. | `1000` | no |
| `prune_interval_seconds`      | `int`           | Deprecated (no-op); use `metadata_refresh_interval` instead. | `30` | no |
| `topics_filter_regex`         | `string`        | Regex filter for topics to be monitored. | `.*` | no |
| `topics_exclude_regex`        | `string`        | Regex that determines which topics to exclude. | `^$` | no |
| `groups_filter_regex`         | `string`        | Regex filter for consumer groups to be monitored. | `.*` | no |
| `groups_exclude_regex`        | `string`        | Regex that determines which consumer groups to exclude. | `^$` | no |
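
For example, a minimal Flow sketch exercising a few of the new arguments might look like the following; the broker address, server name, and regex values are illustrative placeholders:

```river
prometheus.exporter.kafka "example" {
  kafka_uris = ["localhost:9092"]

  // Verify the broker certificate against an explicit name rather than the URL host.
  use_tls         = true
  tls_server_name = "kafka.example.com"

  // The new exclusion regexes complement the existing include filters:
  // drop internal topics and throwaway consumer groups.
  topics_exclude_regex = "^__.*"
  groups_exclude_regex = "^test-.*"
}
```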

## Exported fields

@@ -11,7 +11,7 @@ title: kafka_exporter_config
# kafka_exporter_config

The `kafka_exporter_config` block configures the `kafka_exporter`
integration, which is an embedded version of [`kafka_exporter`](https://github.com/davidmparrott/kafka_exporter).
integration, which is an embedded version of [`kafka_exporter`](https://github.com/grafana/kafka_exporter).
This allows Kafka lag metrics to be collected and exposed as Prometheus metrics.

We strongly recommend that you configure a separate user for the Agent, and give it only the strictly mandatory
@@ -78,9 +78,15 @@ Full reference of options:
# The SASL SCRAM SHA algorithm, sha256 or sha512, used as the mechanism
[sasl_mechanism: <string>]

# Configure the Kerberos client to not use PA_FX_FAST.
[sasl_disable_pafx_fast: <bool>]

# Connect using TLS
[use_tls: <bool>]

# Used to verify the hostname on the returned certificates unless insecure_skip_verify is set. If you don't provide the Kafka server name, the hostname is taken from the URL.
[tls_server_name: <string>]

# The optional certificate authority file for TLS client authentication
[ca_file: <string>]

@@ -108,19 +114,49 @@ Full reference of options:
# Metadata refresh interval
[metadata_refresh_interval: <duration> | default = "1m"]

# Service name when using Kerberos authentication.
[gssapi_service_name: <string>]

# Kerberos config path.
[gssapi_kerberos_config_path: <string>]

# Kerberos realm.
[gssapi_realm: <string>]

# Kerberos keytab file path.
[gssapi_key_tab_path: <string>]

# Kerberos auth type. Either 'keytabAuth' or 'userAuth'.
[gssapi_kerberos_auth_type: <string>]

# Whether to show the offset/lag for all consumer groups; otherwise, only show connected consumer groups.
[offset_show_all: <bool> | default = true]

# Minimum number of topics to monitor.
[topic_workers: <int> | default = 100]

# If true, all scrapes trigger Kafka operations; otherwise, they share results. WARNING: Disable this on large clusters.
[allow_concurrency: <bool> | default = true]

# If true, the broker may auto-create requested topics that do not already exist.
[allow_auto_topic_creation: <bool>]

# Maximum number of offsets to store in the interpolation table for a partition
[max_offsets: <int> | default = 1000]

# How frequently the interpolation table should be pruned, in seconds.
# Deprecated (no-op); use metadata_refresh_interval instead.
[prune_interval_seconds: <int> | default = 30]

# Regex filter for topics to be monitored
[topics_filter_regex: <string> | default = ".*"]

# Regex that determines which topics to exclude.
[topics_exclude_regex: <string> | default = "^$"]

# Regex filter for consumer groups to be monitored
[groups_filter_regex: <string> | default = ".*"]

# Regex that determines which consumer groups to exclude.
[groups_exclude_regex: <string> | default = "^$"]

```
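
As an illustration, a static-mode snippet using the new Kerberos options might look like the following sketch; the service name, realm, and file paths are placeholder values, and depending on your broker setup the SASL options above may also be required:

```yaml
integrations:
  kafka_exporter:
    enabled: true
    kafka_uris: ["localhost:9092"]
    # Kerberos (GSSAPI) settings introduced in this release (illustrative values).
    gssapi_service_name: kafka
    gssapi_kerberos_config_path: /etc/krb5.conf
    gssapi_realm: EXAMPLE.COM
    gssapi_kerberos_auth_type: keytabAuth
    gssapi_key_tab_path: /etc/security/kafka.keytab
```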