
rebase prep
Signed-off-by: Richard Elling <Richard.Elling@RichardElling.com>
richardelling committed Oct 4, 2020
1 parent f3b1d31 commit 6791c0c
Showing 2 changed files with 16 additions and 20 deletions.
32 changes: 16 additions & 16 deletions cmd/zpool_influxdb/README.md
@@ -1,9 +1,9 @@
# Influxdb Metrics for ZFS Pools
The _zpool_influxdb_ program produces
[influxdb](https://github.com/influxdata/influxdb) line protocol
compatible metrics from zpools. In the UNIX tradition, _zpool_influxdb_
does one thing: read statistics from a pool and print them to
stdout. In many ways, this is a metrics-friendly output of
statistics normally observed via the `zpool` command.
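
For orientation, each line of influxdb line protocol carries a measurement
name, comma-separated tags, one or more fields, and an optional timestamp.
A hypothetical sample follows; the measurement, tag, and field names here
are placeholders, not necessarily the names _zpool_influxdb_ emits:
```
zpool_stats,name=tank,state=ONLINE alloc=1234u,free=567890u 1601788800000000000
```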

## Usage
@@ -26,7 +26,7 @@
If no poolname is specified, then all pools are sampled.
#### Histogram Bucket Values
The histogram data collected by ZFS is stored as independent bucket values.
This works well out-of-the-box with an influxdb data source and grafana's
heatmap visualization. The influxdb query for a grafana heatmap
visualization looks like:
```
field(disk_read) last() non_negative_derivative(1s)
```
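In this query, `non_negative_derivative(1s)` converts the monotonically
increasing bucket counters into per-second rates, which is the form the
heatmap panel expects.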
@@ -116,11 +116,11 @@
The ZFS I/O (ZIO) scheduler uses five queues to schedule I/Os to each vdev.
These queues are further divided into active and pending states.
An I/O is pending prior to being issued to the vdev. An active
I/O has been issued to the vdev. The scheduler and its tunable
parameters are described in the
[ZFS documentation for ZIO Scheduler](https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/ZIO%20Scheduler.html).
The ZIO scheduler reports the queue depths as gauges where the value
represents an instantaneous snapshot of the queue depth at
the sample time. Therefore, it is not unusual to see all zeroes
for an idle pool.
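
Because these are gauges rather than cumulative counters, they can be
aggregated directly (for example with `mean()` or `max()`) instead of
taking a derivative. A sketch of such an influxql query, using
hypothetical measurement and field names:
```
SELECT max("sync_w_active") FROM "zpool_vdev_queue" WHERE time > now() - 1h GROUP BY time(10s)
```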

@@ -190,7 +190,7 @@
The histogram fields show cumulative values from lowest to highest.
The largest bucket is tagged "le=+Inf", representing the total count
of I/Os by type and vdev.

Note: trim I/Os can be larger than 16MiB, but the larger sizes are
counted in the 16MiB bucket.
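
Since the buckets are cumulative, per-bucket counts can be recovered by
differencing adjacent buckets. A minimal Python sketch, assuming the values
are ordered from the smallest `le` bucket up to `le=+Inf`:
```
def per_bucket_counts(cumulative):
    # Convert cumulative histogram values into per-bucket counts.
    counts = []
    prev = 0
    for value in cumulative:
        counts.append(value - prev)  # I/Os that fell in this bucket alone
        prev = value
    return counts

# Example: cumulative [2, 5, 5, 9] yields per-bucket [2, 3, 0, 4]
print(per_bucket_counts([2, 5, 5, 9]))
```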

#### zpool_io_size Histogram Tags
@@ -218,16 +218,16 @@
| trim_write_agg | blocks | aggregated trim (aka unmap) writes |

#### About unsigned integers
Telegraf v1.6.2 and later support unsigned 64-bit integers, which more
closely match the uint64_t values used by ZFS. By default, zpool_influxdb
uses ZFS's uint64_t values and the influxdb line protocol unsigned integer
type. If you are using an older telegraf or influxdb where unsigned integers
are not available, use the `--signed-int` option.
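For example, a hypothetical invocation forcing signed output for a pool
named `tank` (the pool name is a placeholder):
```
zpool_influxdb --signed-int tank
```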

## Using _zpool_influxdb_

The simplest method is to use the execd input agent in telegraf. For older
versions of telegraf which lack execd, the exec input agent can be used.
For convenience, one of the sample config files below can be placed in the
telegraf config-directory (often /etc/telegraf/telegraf.d). Telegraf can
be restarted to read the config-directory files.
@@ -269,26 +269,26 @@
```
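
As an illustration of the exec approach, a minimal telegraf input stanza
might look like the following sketch; the binary path is an assumption,
and the shipped sample configs should be preferred:
```
[[inputs.exec]]
  ## Path to zpool_influxdb; install locations vary by distribution (assumption).
  commands = ["/usr/libexec/zfs/zpool_influxdb"]
  timeout = "10s"
  data_format = "influx"
```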

## Caveat Emptor
* Like the _zpool_ command, _zpool_influxdb_ takes a reader
  lock on spa_config for each imported pool. If this lock blocks,
  then the command will also block indefinitely and might be
  unkillable. This is not a normal condition, but can occur if
  there are bugs in the kernel modules.
  For this reason, care should be taken:
  * avoid spawning many of these commands hoping that one might
    finish
  * avoid frequent updates or short sample time
    intervals, because the locks can interfere with the performance
    of other instances of _zpool_ or _zpool_influxdb_

## Other collectors
There are a few other collectors for zpool statistics roaming around
the Internet. Many attempt to screen-scrape `zpool` output in various
ways. The screen-scrape method works poorly for `zpool` output because
of its human-friendly nature. Also, they suffer from the same caveats
as this implementation. This implementation is optimized for directly
collecting the metrics and is much more efficient than the screen-scrapers.

## Feedback Encouraged
Pull requests and issues are greatly appreciated at
https://github.com/openzfs/zfs
4 changes: 0 additions & 4 deletions tests/runfiles/common.run
@@ -898,7 +898,3 @@
tests = ['log_spacemap_import_logs']
pre =
post =
tags = ['functional', 'log_spacemap']

[tests/functional/zpool_influxdb]
tests = 'zpool_influxdb'
tags = ['functional', 'metrics']
