Add receive command docs
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
kakkoyun committed Jun 5, 2020
1 parent 6f2c3b1 commit ebc4847
Showing 4 changed files with 135 additions and 6 deletions.
8 changes: 4 additions & 4 deletions docs/components/query.md
@@ -28,10 +28,10 @@ Thanos Querier essentially allows to aggregate and optionally deduplicate multip

Since, for the Querier, "a backend" is anything that implements the gRPC StoreAPI, we can aggregate data from any number of different storages, such as:

* Prometheus (see [Sidecar](sidecar.md))
* Object Storage (see [Store Gateway](store.md))
* Global alerting/recording rules evaluations (see [Ruler](rule.md))
* Metrics received from Prometheus remote write streams (see [Thanos Receiver](../proposals/201812_thanos-remote-receive.md))
* Prometheus (see [Sidecar](./sidecar.md))
* Object Storage (see [Store Gateway](./store.md))
* Global alerting/recording rules evaluations (see [Ruler](./rule.md))
* Metrics received from Prometheus remote write streams (see [Receiver](./receive.md))
* Another Querier (you can stack Queriers on top of each other)
* Non-Prometheus systems!
* e.g [OpenTSDB](../integrations.md#opentsdb)
129 changes: 129 additions & 0 deletions docs/components/receive.md
@@ -0,0 +1,129 @@
---
title: Receiver
type: docs
menu: components
---

# Receiver

The `thanos receive` command implements the [Prometheus Remote Write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). It builds on top of existing Prometheus servers and retains their usefulness while extending their functionality with long-term storage, horizontal scalability, and downsampling. The [Thanos Sidecar](./sidecar.md) is not sufficient for this, as the system would always lag behind by the block length (typically 2 hours), which would prevent the most common query pattern of Prometheus: real-time queries of very recent data.
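
For example, a Prometheus server can be pointed at a receiver with a `remote_write` entry like the following minimal sketch (the host name is hypothetical; the port matches the default `--remote-write.address` documented below):

```yaml
# prometheus.yml (fragment). `receive.example.com` is a placeholder for
# wherever the receiver's remote-write address is reachable.
remote_write:
  - url: http://receive.example.com:19291/api/v1/receive
```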

This component is only recommended for users for whom pushing is the only viable solution, for example analytics use cases, or cases where data ingestion must be client-initiated, such as software-as-a-service environments.

This component is not recommended for achieving a global view of the data of a single tenant; for those cases, the sidecar-based approach with layered Thanos Queriers is recommended. Multi-tenancy may also be achievable if ingestion is not user-controlled, as labels can then be enforced, for example using the [prom-label-proxy](https://github.com/openshift/prom-label-proxy) (please thoroughly understand the mechanism before employing it, as a wrong configuration could leak data).

Users should also note the [various pros and cons of pushing metrics](https://docs.google.com/document/d/1H47v7WfyKkSLMrR8_iku6u9VB73WrVzBHb2SB6dL9_g/edit#heading=h.2v27snv0lsur).

For more information, please check out the [initial design proposal](../proposals/201812_thanos-remote-receive.md).
For further information on tuning Prometheus remote write, see the [remote write tuning document](https://prometheus.io/docs/practices/remote_write/).
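
As an illustration only, a single-node receiver might be started roughly as follows; all values are example placeholders, and the flags themselves are documented in the section below:

```sh
# Hypothetical single-node example; paths, addresses, and the label value
# are placeholders to adapt to your environment.
thanos receive \
    --tsdb.path "/var/thanos/receive" \
    --label 'receive_replica="0"' \
    --grpc-address "0.0.0.0:10901" \
    --http-address "0.0.0.0:10902" \
    --remote-write.address "0.0.0.0:19291" \
    --objstore.config-file "bucket.yml"
```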

## Flags

[embedmd]:# (flags/receive.txt $)
```$
usage: thanos receive [<flags>]

Accept Prometheus remote write API requests and write to local tsdb
(EXPERIMENTAL, this may change drastically without notice)

Flags:
-h, --help Show context-sensitive help (also try
--help-long and --help-man).
--version Show application version.
--log.level=info Log filtering level.
--log.format=logfmt Log format to use. Possible options: logfmt or
json.
--tracing.config-file=<file-path>
Path to YAML file with tracing configuration.
See format details:
https://thanos.io/tracing.md/#configuration
--tracing.config=<content>
Alternative to 'tracing.config-file' flag
(lower priority). Content of YAML file with
tracing configuration. See format details:
https://thanos.io/tracing.md/#configuration
--http-address="0.0.0.0:10902"
Listen host:port for HTTP endpoints.
--http-grace-period=2m Time to wait after an interrupt received for
HTTP Server.
--grpc-address="0.0.0.0:10901"
Listen ip:port address for gRPC endpoints
(StoreAPI). Make sure this address is routable
from other components.
--grpc-grace-period=2m Time to wait after an interrupt received for
GRPC Server.
--grpc-server-tls-cert="" TLS Certificate for gRPC server, leave blank to
disable TLS
--grpc-server-tls-key="" TLS Key for the gRPC server, leave blank to
disable TLS
--grpc-server-tls-client-ca=""
TLS CA to verify clients against. If no client
CA is specified, there is no client
verification on server side. (tls.NoClientCert)
--remote-write.address="0.0.0.0:19291"
Address to listen on for remote write requests.
--remote-write.server-tls-cert=""
TLS Certificate for HTTP server, leave blank to
disable TLS
--remote-write.server-tls-key=""
TLS Key for the HTTP server, leave blank to
disable TLS
--remote-write.server-tls-client-ca=""
TLS CA to verify clients against. If no client
CA is specified, there is no client
verification on server side. (tls.NoClientCert)
--remote-write.client-tls-cert=""
TLS Certificates to use to identify this client
to the server
--remote-write.client-tls-key=""
TLS Key for the client's certificate
--remote-write.client-tls-ca=""
TLS CA Certificates to use to verify servers
--remote-write.client-server-name=""
Server name to verify the hostname on the
returned gRPC certificates. See
https://tools.ietf.org/html/rfc4366#section-3.1
--tsdb.path="./data" Data directory of TSDB.
--label=key="value" ... External labels to announce. This flag will be
removed in the future when handling multiple
tsdb instances is added.
--objstore.config-file=<file-path>
Path to YAML file that contains object store
configuration. See format details:
https://thanos.io/storage.md/#configuration
--objstore.config=<content>
Alternative to 'objstore.config-file' flag
(lower priority). Content of YAML file that
contains object store configuration. See format
details:
https://thanos.io/storage.md/#configuration
--tsdb.retention=15d How long to retain raw samples on local
storage. 0d - disables this retention
--receive.hashrings-file=<path>
Path to file that contains the hashring
configuration.
--receive.hashrings-file-refresh-interval=5m
Refresh interval to re-read the hashring
configuration file. (used as a fallback)
--receive.local-endpoint=RECEIVE.LOCAL-ENDPOINT
Endpoint of local receive node. Used to
identify the local node in the hashring
configuration.
--receive.tenant-header="THANOS-TENANT"
HTTP header to determine tenant for write
requests.
--receive.default-tenant-id="default-tenant"
Default tenant ID to use when none is provided
via a header.
--receive.tenant-label-name="tenant_id"
Label name through which the tenant will be
announced.
--receive.replica-header="THANOS-REPLICA"
HTTP header specifying the replica number of a
write request.
--receive.replication-factor=1
How many times to replicate incoming write
requests.
--tsdb.wal-compression Compress the tsdb WAL.
```
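
The `--receive.hashrings-file` flag points at a hashring configuration file. As a sketch, assuming two receive nodes serving the default tenant, such a file is a JSON list of hashrings, each with its tenants and member endpoints; the endpoint values below are hypothetical and must match what each node announces via `--receive.local-endpoint`:

```json
[
    {
        "hashring": "default",
        "tenants": ["default-tenant"],
        "endpoints": [
            "receive-0.example.com:19291",
            "receive-1.example.com:19291"
        ]
    }
]
```

With such a file, write requests carrying the tenant ID `default-tenant` (see `--receive.tenant-header` and `--receive.default-tenant-id` above) are spread across the two listed endpoints.
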
2 changes: 1 addition & 1 deletion docs/components/sidecar.md
@@ -13,7 +13,7 @@ In details:
* It implements Thanos' Store API on top of Prometheus' remote-read API. This allows [Queriers](./query.md) to treat Prometheus servers as yet another source of time series data without directly talking to their APIs.
* Optionally, the sidecar uploads TSDB blocks to an object storage bucket as Prometheus produces them every 2 hours. This allows Prometheus servers to be run with relatively low retention while their historic data is made durable and queryable via object storage.

NOTE: This still does NOT mean that Prometheus can be fully stateless, because if it crashes and restarts you will lose ~2 hours of metrics, so a persistent disk for Prometheus is highly recommended. The closest to stateless you can get is using remote write (which Thanos experimentally supports; see [this](../proposals/201812_thanos-remote-receive.md)). Remote write has other risks and consequences, and even then a crash still loses, in the best case, seconds of metrics data, so a persistent disk is recommended in all cases.
NOTE: This still does NOT mean that Prometheus can be fully stateless, because if it crashes and restarts you will lose ~2 hours of metrics, so a persistent disk for Prometheus is highly recommended. The closest to stateless you can get is using remote write (which Thanos experimentally supports; see [Receiver](./receive.md)). Remote write has other risks and consequences, and even then a crash still loses, in the best case, seconds of metrics data, so a persistent disk is recommended in all cases.

* Optionally, the Thanos sidecar is able to watch Prometheus rules and configuration, decompress and substitute environment variables if needed, and ping Prometheus to reload them. Read more about this [here](./sidecar.md#reloader-configuration)

2 changes: 1 addition & 1 deletion scripts/genflagdocs.sh
@@ -37,7 +37,7 @@ CHECK=${1:-}

# Auto update flags.

commands=("compact" "query" "rule" "sidecar" "store" "tools")
commands=("compact" "query" "rule" "sidecar" "store" "receive" "tools")
for x in "${commands[@]}"; do
${THANOS_BIN} "${x}" --help &> "docs/components/flags/${x}.txt"
done
