Metrictank comes with a bunch of helper tools.
Here is an overview of them all.
This file is generated by tools-to-doc
mt-aggs-explain
Usage:
mt-aggs-explain [flags] [config-file]
(config file defaults to /etc/metrictank/storage-aggregation.conf)
Flags:
-metric string
specify a metric name to see which aggregation rule it matches
-version
print version string
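For example, to check which aggregation rule a given metric name would match (the metric name below is a made-up placeholder), you could run something like:
# placeholder metric name; substitute one of your own series
mt-aggs-explain -metric collectd.host1.cpu.usage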
mt-explain
Explains the execution plan for a given query / set of targets
Usage:
mt-explain
Example:
mt-explain -from -24h -to now -mdp 1000 "movingAverage(sumSeries(foo.bar), '2min')" "alias(averageSeries(foo.*), 'foo-avg')"
mt-gateway
Provides an HTTP gateway for interacting with metrictank, including metrics ingestion
Usage:
mt-gateway [flags]
Flags:
-addr string
http service address (default ":80")
-default-org-id int
default org ID to send to downstream services if none is provided (default -1)
-discard-prefixes string
discard data points starting with one of the given prefixes separated by | (may be given multiple times, once per topic, as a comma-separated list)
-graphite-url string
graphite-api address (default "http://localhost:8080")
-importer-url string
mt-whisper-importer-writer address
-kafka-tcp-addr string
kafka tcp address(es) for metrics, in csv host[:port] format (default "localhost:9092")
-kafka-version string
Kafka version in semver format. All brokers must be this version or newer. (default "0.10.0.0")
-metrics-flush-freq duration
The best-effort frequency of flushes to kafka (default 50ms)
-metrics-kafka-comp string
compression: none|gzip|snappy (default "snappy")
-metrics-max-messages int
The maximum number of messages the producer will send in a single request (default 5000)
-metrics-partition-scheme string
method used for partitioning metrics. (byOrg|bySeries|bySeriesWithTags|bySeriesWithTagsFnv) (may be given multiple times, once per topic, as a comma-separated list) (default "bySeries")
-metrics-publish
enable metric publishing
-metrics-topic string
topic for metrics (may be given multiple times as a comma-separated list) (default "mdm")
-metrictank-url string
metrictank address (default "http://localhost:6060")
-only-org-id value
restrict publishing data belonging to org id; 0 means no restriction (may be given multiple times, once per topic, as a comma-separated list)
-schemas-file string
path to carbon storage-schemas.conf file (default "/etc/gw/storage-schemas.conf")
-v2
enable optimized MetricPoint payload (default true)
-v2-clear-interval duration
interval after which we always resend a full MetricData (default 1h0m0s)
-v2-org
encode org-id in messages (default true)
-version
print version string
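A minimal sketch of running the gateway against local services, using only the flags documented above (the addresses are placeholders for your own kafka, metrictank and graphite endpoints):
# placeholder addresses; adjust to your deployment
mt-gateway -addr ":8081" -kafka-tcp-addr kafka:9092 -metrics-topic mdm -metrics-publish -metrictank-url http://metrictank:6060 -graphite-url http://graphite:8080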
mt-index-cat
Retrieves a metrictank index and dumps it in the requested format
In particular, the vegeta outputs are handy to pipe requests for given series into the vegeta http benchmark tool
Usage:
mt-index-cat [global config flags] <idxtype> [idx config flags] output
global config flags:
-addr string
graphite/metrictank address (default "http://localhost:6060")
-from string
for vegeta outputs, will generate requests for data starting from now minus... e.g. '30min', '5h', '14d', etc. or a unix timestamp (default "30min")
-limit int
only show this many metrics. use 0 to disable
-max-stale string
exclude series that have not been seen for this much time (compared against LastUpdate). use 0 to disable (default "6h30min")
-min-stale string
exclude series that have been seen in this much time (compared against LastUpdate). use 0 to disable (default "0")
-partitions string
only show metrics from the comma separated list of partitions or * for all (default "*")
-prefix string
only show metrics that have this prefix
-regex string
only show metrics that match this regex
-substr string
only show metrics that have this substring
-suffix string
only show metrics that have this suffix
-tags string
tag filter. empty (default), 'some', 'none', 'valid', or 'invalid'
-verbose
print stats to stderr
tags filter:
'' no filtering based on tags
'none' only show metrics that have no tags
'some' only show metrics that have one or more tags
'valid' only show metrics whose tags (if any) are valid
'invalid' only show metrics that have one or more invalid tags
idxtype: only 'cass' supported for now
cass config flags:
-archive-table string
Cassandra table to archive metricDefinitions in. (default "metric_idx_archive")
-auth
enable cassandra user authentication
-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-connection-check-interval duration
interval at which to perform a connection check to cassandra, set to 0 to disable. (default 5s)
-connection-check-timeout duration
maximum total time to wait before considering a connection to cassandra invalid. This value should be higher than connection-check-interval. (default 30s)
-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-create-keyspace
enable the creation of the index keyspace and tables, only one node needs this (default true)
-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-enabled
(default true)
-host-verification
host (hostname and server cert) verification when using SSL (default true)
-hosts string
comma separated list of cassandra addresses in host:port form (default "localhost:9042")
-init-load-concurrency int
Number of partitions to load concurrently on startup. (default 1)
-keyspace string
Cassandra keyspace to store metricDefinitions in. (default "metrictank")
-meta-record-batch-table string
Cassandra table to store meta data of meta record batches. (default "meta_record_batches")
-meta-record-poll-interval duration
Interval at which to poll store for meta record updates. (default 10s)
-meta-record-prune-age duration
The minimum age a batch of meta records must have to be pruned. (default 72h0m0s)
-meta-record-prune-interval duration
Interval at which meta records of old batches get pruned. (default 24h0m0s)
-meta-record-table string
Cassandra table to store meta records. (default "meta_records")
-num-conns int
number of concurrent connections to cassandra (default 10)
-password string
password for authentication (default "cassandra")
-protocol-version int
cql protocol version to use (default 4)
-prune-interval duration
Interval at which the index should be checked for stale series. (default 3h0m0s)
-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-idx-cassandra.toml")
-ssl
enable SSL connection to cassandra
-table string
Cassandra table to store metricDefinitions in. (default "metric_idx")
-timeout duration
cassandra request timeout (default 1s)
-update-cassandra-index
synchronize index changes to cassandra. not all your nodes need to do this. (default true)
-update-interval duration
frequency at which we should update the metricDef lastUpdate field, use 0s for instant updates (default 3h0m0s)
-username string
username for authentication (default "cassandra")
-write-queue-size int
Max number of metricDefs allowed to be unwritten to cassandra (default 100000)
output:
* presets: dump|list|vegeta-render|vegeta-render-patterns
* templates, which may contain:
- fields, e.g. '{{.Id}} {{.OrgId}} {{.Name}} {{.Interval}} {{.Unit}} {{.Mtype}} {{.Tags}} {{.LastUpdate}} {{.Partition}}'
- methods, e.g. '{{.NameWithTags}}' (works basically the same as a field)
- processing functions:
pattern: transforms a graphite.style.metric.name into a pattern with wildcards inserted
an operation is randomly selected from: replacing a node with a wildcard, replacing a character with a wildcard, and passthrough
patternCustom: transforms a graphite.style.metric.name into a pattern with wildcards inserted according to rules provided:
patternCustom <chance> <operation>[ <chance> <operation>...]
the chances need to add up to 100
operation is one of:
* pass (passthrough)
* <digit>rcnw (replace a randomly chosen sequence of <digit (0-9)> consecutive nodes with wildcards)
* <digit>rccw (replace a randomly chosen sequence of <digit (0-9)> consecutive characters with wildcards)
example: {{.Name | patternCustom 15 "pass" 40 "1rcnw" 15 "2rcnw" 10 "3rcnw" 10 "3rccw" 10 "2rccw"}}\n
age: subtracts the passed integer (typically .LastUpdate) from the query time
roundDuration: formats an integer-seconds duration using aggressive rounding, for the purpose of getting an idea of the overall metrics age
EXAMPLES:
mt-index-cat -from 60min cass -hosts cassandra:9042 list
mt-index-cat -from 60min cass -hosts cassandra:9042 'sumSeries({{.Name | pattern}})'
mt-index-cat -from 60min cass -hosts cassandra:9042 'GET http://localhost:6060/render?target=sumSeries({{.Name | pattern}})&from=-6h\nX-Org-Id: 1\n\n'
mt-index-cat cass -hosts cassandra:9042 -timeout 60s '{{.LastUpdate | age | roundDuration}}\n' | sort | uniq -c
mt-index-cat cass -hosts localhost:9042 -schema-file ../../scripts/config/schema-idx-cassandra.toml '{{.Name | patternCustom 15 "pass" 40 "1rcnw" 15 "2rcnw" 10 "3rcnw" 10 "3rccw" 10 "2rccw"}}\n'
mt-index-migrate
Migrate metric index from one cassandra keyspace to another.
This tool can be used for moving data to a different keyspace or cassandra cluster
or for resetting partition information when the number of partitions being used has changed.
Flags:
-dry-run
run in dry-run mode. No changes will be made. (default true)
-dst-cass-addr string
Address of cassandra host to migrate to. (default "localhost")
-dst-keyspace string
Cassandra keyspace in use on destination. (default "raintank")
-dst-table string
Cassandra table name in use on destination. (default "metric_idx")
-log-level string
log level. panic|fatal|error|warning|info|debug (default "info")
-num-partitions int
number of partitions in cluster (default 1)
-partition-scheme string
method used for partitioning metrics. (byOrg|bySeries|bySeriesWithTags|bySeriesWithTagsFnv) (default "byOrg")
-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-idx-cassandra.toml")
-src-cass-addr string
Address of cassandra host to migrate from. (default "localhost")
-src-keyspace string
Cassandra keyspace in use on source. (default "raintank")
-src-table string
Cassandra table name in use on source. (default "metric_idx")
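For example, to migrate the index between two clusters while repartitioning (the hostnames are placeholders; note that dry-run defaults to true, so it must be disabled explicitly before any data is written):
# cassandra-old / cassandra-new are placeholder hostnames
mt-index-migrate -src-cass-addr cassandra-old -dst-cass-addr cassandra-new -src-keyspace raintank -dst-keyspace raintank -num-partitions 8 -partition-scheme bySeries -dry-run=false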
mt-index-prune
Retrieves a metrictank index and moves all deprecated entries into an archive table
Usage:
mt-index-prune [global config flags] <idxtype> [idx config flags]
global config flags:
-index-rules-file string
name of file which defines the max-stale times (default "/etc/metrictank/index-rules.conf")
-no-dry-run
do not only plan and print what to do, but also execute it
-partition-from int
the partition to start at
-partition-to int
prune all partitions up to this one (exclusive). If unset, only the partition defined with "--partition-from" gets pruned (default -1)
-verbose
print every metric name that gets archived
idxtype: only 'cass' supported for now
cass config flags:
-archive-table string
Cassandra table to archive metricDefinitions in. (default "metric_idx_archive")
-auth
enable cassandra user authentication
-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-connection-check-interval duration
interval at which to perform a connection check to cassandra, set to 0 to disable. (default 5s)
-connection-check-timeout duration
maximum total time to wait before considering a connection to cassandra invalid. This value should be higher than connection-check-interval. (default 30s)
-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-create-keyspace
enable the creation of the index keyspace and tables, only one node needs this (default true)
-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-enabled
(default true)
-host-verification
host (hostname and server cert) verification when using SSL (default true)
-hosts string
comma separated list of cassandra addresses in host:port form (default "localhost:9042")
-init-load-concurrency int
Number of partitions to load concurrently on startup. (default 1)
-keyspace string
Cassandra keyspace to store metricDefinitions in. (default "metrictank")
-meta-record-batch-table string
Cassandra table to store meta data of meta record batches. (default "meta_record_batches")
-meta-record-poll-interval duration
Interval at which to poll store for meta record updates. (default 10s)
-meta-record-prune-age duration
The minimum age a batch of meta records must have to be pruned. (default 72h0m0s)
-meta-record-prune-interval duration
Interval at which meta records of old batches get pruned. (default 24h0m0s)
-meta-record-table string
Cassandra table to store meta records. (default "meta_records")
-num-conns int
number of concurrent connections to cassandra (default 10)
-password string
password for authentication (default "cassandra")
-protocol-version int
cql protocol version to use (default 4)
-prune-interval duration
Interval at which the index should be checked for stale series. (default 3h0m0s)
-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-idx-cassandra.toml")
-ssl
enable SSL connection to cassandra
-table string
Cassandra table to store metricDefinitions in. (default "metric_idx")
-timeout duration
cassandra request timeout (default 1s)
-update-cassandra-index
synchronize index changes to cassandra. not all your nodes need to do this. (default true)
-update-interval duration
frequency at which we should update the metricDef lastUpdate field, use 0s for instant updates (default 3h0m0s)
-username string
username for authentication (default "cassandra")
-write-queue-size int
Max number of metricDefs allowed to be unwritten to cassandra (default 100000)
EXAMPLES:
mt-index-prune --verbose --partition-from 0 --partition-to 8 cass -hosts cassandra:9042
mt-kafka-mdm-sniff
Inspects what's flowing through kafka (in mdm format) and reports it to you
Flags:
-config string
configuration file path (default "/etc/metrictank/metrictank.ini")
-format-md string
template to render MetricData with (default "{{.Part}} {{.OrgId}} {{.Id}} {{.Name}} {{.Interval}} {{.Value}} {{.Time}} {{.Unit}} {{.Mtype}} {{.Tags}}")
-format-point string
template to render MetricPoint data with (default "{{.Part}} {{.MKey}} {{.Value}} {{.Time}}")
-invalid
only show metrics that are invalid
-prefix string
only show metrics that have this prefix
-substr string
only show metrics that have this substring
you can also use functions in templates:
date: formats a unix timestamp as a date
example: mt-kafka-mdm-sniff -format-point '{{.Time | date}}'
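Another hypothetical invocation, showing only invalid metrics under a given prefix (the prefix is just an illustration):
# "collectd." is a placeholder prefix
mt-kafka-mdm-sniff -config /etc/metrictank/metrictank.ini -invalid -prefix collectd.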
mt-kafka-mdm-sniff-out-of-order
Inspects what's flowing through kafka (in mdm format) and reports out-of-order data (does not take into account the reorder buffer)
# Mechanism
* it sniffs points being added on a per-series (metric Id) level
* for every series, it tracks the last 'correct' point, i.e. a point that could be added to the series because its timestamp is higher than any previous timestamp
* if, for any series, a point comes in with a timestamp equal to or lower than that of the last correct point - which metrictank would not add unless it falls within the reorder buffer - it triggers an event for this out-of-order point
every event is printed using the format specified for its message type
# Event formatting
Uses standard golang templating. E.g. {{field}} with these available fields:
NumBad - number of failed points since last successful add
DeltaTime - delta between Head and Bad time properties in seconds (point timestamps)
DeltaSeen - delta between Head and Bad seen time in seconds (consumed from kafka)
.Head.* - Head is the last successfully added message
.Bad.* - Bad is the current point that could not be added (assuming no reorder buffer)
under Head and Bad, the following subfields are available:
Part (partition) and Seen (when the msg was consumed from kafka)
for MetricData, prefix these with Md. : Time OrgId Id Name Metric Interval Value Unit Mtype Tags
for MetricPoint, prefix these with Mp. : Time MKey Value
Flags:
-config string
configuration file path (default "/etc/metrictank/metrictank.ini")
-do-unknown-mp
process MetricPoint messages for which no MetricData messages have been seen, and hence for which we can't apply prefix/substr filter (default true)
-format string
template to render event with (default "{{.Bad.Md.Id}} {{.Bad.Md.Name}} {{.Bad.Mp.MKey}} {{.DeltaTime}} {{.DeltaSeen}} {{.NumBad}}")
-prefix string
only show metrics with a name that has this prefix
-substr string
only show metrics with a name that has this substring
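For example, to watch a subset of series and print a shorter event line, using only the fields documented above (the prefix and template are illustrative):
# "collectd." is a placeholder prefix
mt-kafka-mdm-sniff-out-of-order -prefix collectd. -format '{{.Bad.Md.Name}} {{.DeltaTime}} {{.NumBad}}'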
mt-kafka-persist-sniff
Print what's flowing through kafka metric persist topic
Flags:
-backlog-process-timeout string
Maximum time backlog processing can block during metrictank startup. Setting to a low value may result in data loss (default "60s")
-brokers string
tcp address for kafka (may be given multiple times as comma separated list) (default "kafka:9092")
-enabled
-kafka-version string
Kafka version in semver format. All brokers must be this version or newer. (default "2.0.0")
-offset string
Set the offset to start consuming from. Can be oldest, newest or a time duration (default "newest")
-partitions string
kafka partitions to consume. use '*' or a comma separated list of id's. This should match the partitions used for kafka-mdm-in (default "*")
-topic string
kafka topic (default "metricpersist")
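For example, to replay the persist topic from the beginning (the broker address is a placeholder):
# kafka:9092 is a placeholder broker address
mt-kafka-persist-sniff -brokers kafka:9092 -topic metricpersist -offset oldest -partitions '*'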
mt-keygen
mt-keygen gives you the MKey for a specific MetricDefinition
It fills a temp file with a template MetricDefinition
It launches vim
You fill in the important details - name / interval / tags /...
It prints the MKey
-version
print version string
mt-schemas-explain
Usage:
mt-schemas-explain [flags] [config-file]
(config file defaults to /etc/metrictank/storage-schemas.conf)
Flags:
-int int
specify an interval to apply interval-based matching in addition to metric matching (e.g. to simulate kafka-mdm input)
-metric string
specify a metric name to see which schema it matches
-version
print version string
-window-factor int
size of compaction window relative to TTL (default 20)
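For example, to see which retention schema a metric with a 10-second interval would match (the metric name is a placeholder, and the interval is assumed to be given in seconds):
# placeholder metric name; substitute one of your own series
mt-schemas-explain -metric collectd.host1.cpu.usage -int 10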
mt-split-metrics-by-ttl [flags] ttl [ttl...]
Creates schema of metric tables split by TTLs and
assists in migrating the data to new tables.
Flags:
-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-cassandra-auth
enable cassandra authentication
-cassandra-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-cassandra-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-cassandra-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-cassandra-host-selection-policy string
(default "tokenaware,hostpool-epsilon-greedy")
-cassandra-host-verification
host (hostname and server cert) verification when using SSL (default true)
-cassandra-keyspace string
cassandra keyspace to use for storing the metric data table (default "metrictank")
-cassandra-password string
password for authentication (default "cassandra")
-cassandra-retries int
how many times to retry a query before failing it
-cassandra-ssl
enable SSL connection to cassandra
-cassandra-timeout string
cassandra timeout (default "1s")
-cassandra-username string
username for authentication (default "cassandra")
-cql-protocol-version int
cql protocol version to use (default 4)
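A sketch of splitting data into per-TTL tables, assuming the TTL arguments accept the same duration notation as storage-schemas.conf (the host and TTL values are placeholders):
# "cassandra" is a placeholder host; 8h 7d 2y are illustrative TTLs
mt-split-metrics-by-ttl -cassandra-addrs cassandra -cassandra-keyspace metrictank 8h 7d 2y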
mt-store-cat
Retrieves timeseries data from the cassandra store. Either raw or with minimal processing
Usage:
mt-store-cat [flags] tables
mt-store-cat [flags] <table-selector> <metric-selector> <format>
table-selector: '*' or name of a table. e.g. 'metric_128'
metric-selector: '*' or an id (of raw or aggregated series) or prefix:<prefix> or substr:<substring> or glob:<pattern>
format:
- points
- point-summary
- chunk-summary (shows TTLs, optionally bucketed. See groupTTL flag)
- chunk-csv (for importing into cassandra)
EXAMPLES:
mt-store-cat -cassandra-keyspace metrictank -from='-1min' '*' '1.77c8c77afa22b67ef5b700c2a2b88d5f' points
mt-store-cat -cassandra-keyspace metrictank -from='-1month' '*' 'prefix:fake' point-summary
mt-store-cat -cassandra-keyspace metrictank '*' 'prefix:fake' chunk-summary
mt-store-cat -groupTTL h -cassandra-keyspace metrictank 'metric_512' '1.37cf8e3731ee4c79063c1d55280d1bbe' chunk-summary
Flags:
-archive string
archive to fetch for given metric. e.g. 'sum_1800'
-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-cassandra-auth
enable cassandra authentication
-cassandra-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-cassandra-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-cassandra-create-keyspace
enable the creation of the mdata keyspace and tables, only one node needs this (default true)
-cassandra-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-cassandra-host-selection-policy string
(default "tokenaware,hostpool-epsilon-greedy")
-cassandra-host-verification
host (hostname and server cert) verification when using SSL (default true)
-cassandra-keyspace string
cassandra keyspace to use for storing the metric data table (default "metrictank")
-cassandra-omit-read-timeout string
if a read is older than this, it will directly be omitted without executing (default "60s")
-cassandra-password string
password for authentication (default "cassandra")
-cassandra-read-concurrency int
max number of concurrent reads to cassandra. (default 20)
-cassandra-read-queue-size int
max number of outstanding reads before reads will be dropped. This is important if you run queries that result in many reads in parallel. (default 200000)
-cassandra-retries int
how many times to retry a query before failing it
-cassandra-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-store-cassandra.toml")
-cassandra-ssl
enable SSL connection to cassandra
-cassandra-timeout string
cassandra timeout (default "1s")
-cassandra-username string
username for authentication (default "cassandra")
-config string
configuration file path (default "/etc/metrictank/metrictank.ini")
-cql-protocol-version int
cql protocol version to use (default 4)
-fix int
fix data to this interval like metrictank does quantization. only for points and point-summary format
-from string
get data from (inclusive). only for points and point-summary format (default "-24h")
-groupTTL string
group chunks in TTL buckets: s (second. means unbucketed), m (minute), h (hour) or d (day). only for chunk-summary format (default "d")
-index-archive-table string
Cassandra table to archive metricDefinitions in. (default "metric_idx_archive")
-index-init-load-concurrency int
Number of partitions to load concurrently on startup. (default 1)
-index-schema-file string
File containing the needed index schemas in case database needs initializing (default "/etc/metrictank/schema-idx-cassandra.toml")
-index-table string
Cassandra table to store metricDefinitions in. (default "metric_idx")
-index-timeout duration
cassandra request timeout (default 1s)
-print-ts
print time stamps instead of formatted dates. only for points and point-summary format
-time-zone string
time-zone to use for interpreting from/to when needed. (check your config) (default "local")
-to string
get data until (exclusive). only for points and point-summary format (default "now")
-verbose
verbose (print stuff about the request)
-version
print version string
-window-factor int
size of compaction window relative to TTL (default 20)
Notes:
* Using `*` as metric-selector may bring down your cassandra. Especially chunk-summary ignores from/to and queries all data.
With great power comes great responsibility
* points that are not in the `from <= ts < to` range are prefixed with `-`; in-range points are prefixed with `>`
* When using chunk-summary, if there's data that should have been expired by cassandra but for some reason wasn't, we won't see or report it
* Doesn't automatically return data for aggregated series. It's up to you to query for an AMKey (id_<rollup>_<span>) when appropriate
* (rollup is one of sum, cnt, lst, max, min and span is a number in seconds)
mt-store-cp [flags] table-in [table-out]
Copies data in Cassandra to use another table (and possibly another cluster).
It is up to you to ensure table-out exists before running this tool
This tool is EXPERIMENTAL and needs double-checking whether data is successfully written to Cassandra
see https://github.com/grafana/metrictank/pull/909 for details
Please report good or bad experiences in the above ticket or in a new one
Flags:
-cassandra-auth
enable cassandra authentication
-cassandra-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-cassandra-concurrency int
max number of concurrent reads to cassandra. (default 20)
-cassandra-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-cassandra-disable-host-lookup
disable host lookup (useful if going through proxy)
-cassandra-host-selection-policy string
(default "tokenaware,hostpool-epsilon-greedy")
-cassandra-host-verification
host (hostname and server cert) verification when using SSL (default true)
-cassandra-keyspace string
cassandra keyspace to use for storing the metric data table (default "metrictank")
-cassandra-password string
password for authentication (default "cassandra")
-cassandra-retries int
how many times to retry a query before failing it
-cassandra-ssl
enable SSL connection to cassandra
-cassandra-timeout string
cassandra timeout (default "1s")
-cassandra-username string
username for authentication (default "cassandra")
-cql-protocol-version int
cql protocol version to use (default 4)
-dest-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-end-timestamp int
timestamp at which to stop, defaults to int max (default 2147483647)
-end-token int
token to stop at (inclusive), defaults to math.MaxInt64 (default 9223372036854775807)
-idx-table string
idx table in cassandra (default "metric_idx")
-max-batch-size int
max number of queries per batch (default 10)
-partitions string
process ids for these partitions (comma separated list of partition numbers or '*' for all) (default "*")
-progress-rows int
number of rows between progress output (default 1000000)
-source-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-start-timestamp int
timestamp at which to start, defaults to 0
-start-token int
token to start at (inclusive), defaults to math.MinInt64 (default -9223372036854775808)
-threads int
number of workers to use to process data (default 1)
-verbose
show every record being processed
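For example, to copy one table between two clusters (the hostnames are placeholders, metric_512 is just an example table name, and the destination table must already exist):
# cassandra-old / cassandra-new are placeholder hostnames
mt-store-cp -source-cassandra-addrs cassandra-old -dest-cassandra-addrs cassandra-new -cassandra-keyspace metrictank -verbose metric_512 metric_512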
mt-update-ttl [flags] ttl-old ttl-new
Adjusts the data in Cassandra to use a new TTL value. The TTL is applied counting from the timestamp of the data
It automatically resolves the corresponding tables based on the TTL values. If the table stays the same, it will update in place; otherwise it will copy to the new table, leaving the input data untouched
Unless you disable create-keyspace, tables are created as needed
Flags:
-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-cassandra-auth
enable cassandra authentication
-cassandra-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-cassandra-concurrency int
number of concurrent connections to cassandra. (default 20)
-cassandra-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-cassandra-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-cassandra-host-verification
host (hostname and server cert) verification when using SSL (default true)
-cassandra-keyspace string
cassandra keyspace to use for storing the metric data table (default "metrictank")
-cassandra-password string
password for authentication (default "cassandra")
-cassandra-retries int
how many times to retry a query before failing it
-cassandra-ssl
enable SSL connection to cassandra
-cassandra-timeout string
cassandra timeout (default "1s")
-cassandra-username string
username for authentication (default "cassandra")
-cql-protocol-version int
cql protocol version to use (default 4)
-create-keyspace
enable the creation of the keyspace and tables (default true)
-end-timestamp int
timestamp at which to stop, defaults to int max (default 2147483647)
-host-selection-policy string
(default "tokenaware,hostpool-epsilon-greedy")
-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-store-cassandra.toml")
-start-timestamp int
timestamp at which to start, defaults to 0
-status-every int
print status every x keys (default 100000)
-threads int
number of workers to use to process data (default 10)
-verbose
show every record being processed
-window-factor int
size of compaction window relative to TTL (default 20)
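For example, to rewrite data from a 7d TTL to a 30d TTL, assuming the TTL arguments accept the same duration notation as storage-schemas.conf (the host and TTLs are placeholders):
# "cassandra" is a placeholder host; 7d and 30d are illustrative TTLs
mt-update-ttl -cassandra-addrs cassandra -cassandra-keyspace metrictank -threads 10 7d 30d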
mt-view-boundaries
Shows boundaries of rows in cassandra and of spans of specified size.
to see UTC times, just prefix the command with TZ=UTC
-span string
see boundaries for chunks of this span
-version
print version string
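For example, to view boundaries in UTC (2h here is just an example span):
TZ=UTC mt-view-boundaries -span 2h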
mt-whisper-importer-reader
Usage of ./mt-whisper-importer-reader:
-dst-schemas string
The filename of the output schemas definition file
-http-auth string
The credentials used to authenticate in the format "user:password"
-http-endpoint string
The http endpoint to send the data to (default "http://127.0.0.1:8080/metrics/import")
-import-from uint
Only import starting from the specified timestamp
-import-until uint
Only import up to, but not including, the specified timestamp (default 4294967295)
-insecure-ssl
Disables ssl certificate verification
-name-filter string
A regex pattern to be applied to all metric names, only matching ones will be imported
-name-prefix string
Prefix to prepend before every metric name, should include the '.' if necessary
-position-file string
file to store position and load position from
-threads int
Number of worker threads to process and convert .wsp files (default 10)
-verbose
More detailed logging
-whisper-directory string
The directory that contains the whisper file structure (default "/opt/graphite/storage/whisper")
-write-unfinished-chunks
Defines if chunks that have not completed their chunk span should be written
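A hypothetical import run, reading whisper files and posting them to a local importer-writer (the position-file path is a placeholder):
# position file path is a placeholder
mt-whisper-importer-reader -whisper-directory /opt/graphite/storage/whisper -http-endpoint http://localhost:8080/metrics/import -threads 10 -position-file /var/tmp/importer.pos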
mt-whisper-importer-writer
Usage of ./mt-whisper-importer-writer:
-config string
configuration file path (default "/etc/metrictank/metrictank.ini")
-exit-on-error
Exit with a message when there's an error
-http-endpoint string
The http endpoint to listen on (default "0.0.0.0:8080")
-log-level string
log level. panic|fatal|error|warning|info|debug (default "info")
-num-partitions int
Number of Partitions (default 1)
-partition-scheme string
method used for partitioning metrics. This should match the settings of tsdb-gw. (byOrg|bySeries|bySeriesWithTags|bySeriesWithTagsFnv) (default "bySeries")
-uri-path string
the URI on which we expect chunks to get posted (default "/metrics/import")
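A matching hypothetical writer invocation on the receiving side (values mirror the defaults above; the partition count is a placeholder and should match your deployment):
# -num-partitions 8 is a placeholder value
mt-whisper-importer-writer -config /etc/metrictank/metrictank.ini -http-endpoint 0.0.0.0:8080 -num-partitions 8 -partition-scheme bySeries -uri-path /metrics/import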