Heapster can store data into different backends (sinks). These are specified on the command line
via the `--sink` flag. The flag takes an argument of the form `PREFIX:CONFIG[?OPTIONS]`.
Options (optional!) are specified as URL query parameters, separated by `&` as normal.
This allows each sink to have custom configuration passed to it without needing to
continually add new flags to Heapster as new sinks are added. This also means
Heapster can store data into multiple sinks at once.
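As a concrete illustration of the `PREFIX:CONFIG[?OPTIONS]` form, the sketch below assembles a sink flag from its three parts; the server URL and option values are just examples:

```shell
# PREFIX selects the sink type, CONFIG is sink-specific (here a server URL),
# and OPTIONS are ordinary URL query parameters joined with '&'.
PREFIX=influxdb
CONFIG="http://monitoring-influxdb:80/"
OPTIONS="user=root&db=k8s"
echo "--sink=${PREFIX}:${CONFIG}?${OPTIONS}"
# prints: --sink=influxdb:http://monitoring-influxdb:80/?user=root&db=k8s
```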
This sink writes all data to standard output, which is particularly useful for debugging.

```
--sink=log
```
This sink supports both monitoring metrics and events. It supports InfluxDB versions v0.9 and above. To use the InfluxDB sink add the following flag:

```
--sink=influxdb:<INFLUXDB_URL>[?<INFLUXDB_OPTIONS>]
```

If you're running Heapster in a Kubernetes cluster with the default InfluxDB + Grafana setup, you can use the following flag:

```
--sink=influxdb:http://monitoring-influxdb:80/
```
The following options are available:
- `user` - InfluxDB username (default: `root`)
- `pw` - InfluxDB password (default: `root`)
- `db` - InfluxDB database name (default: `k8s`)
- `retention` - Duration of the default InfluxDB retention policy, e.g. `4h` or `7d` (default: `0`, meaning infinite)
- `secure` - Connect securely to InfluxDB (default: `false`)
- `insecuressl` - Ignore SSL certificate validity (default: `false`)
- `withfields` - Use InfluxDB fields (default: `false`)
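For instance, a flag combining several of these options might look like the following; the database name and retention period here are illustrative:

```shell
--sink=influxdb:http://monitoring-influxdb:80/?user=root&pw=root&db=k8s&retention=7d
```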
This sink supports monitoring metrics only. To use the GCM sink add the following flag:

```
--sink=gcm
```

*Note: This sink works only on a Google Compute Engine VM as of now.*
GCM has one option, `metrics`, that can be set to:
- `all` - the sink exports all metrics
- `autoscaling` - the sink exports only autoscaling-related metrics
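Assuming the option is passed as a query parameter in the same way as for the other sinks, the flag might look like:

```shell
--sink=gcm:?metrics=autoscaling
```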
This sink supports events only. To use the GCL sink add the following flag:

```
--sink=gcl
```
Notes:
- This sink works only on a Google Compute Engine VM as of now
- The GCE instance must have the `https://www.googleapis.com/auth/logging.write` auth scope
- The GCE instance must have the Logging API enabled for the project in the Google Developer Console
- GCL logs are accessible via `https://console.developers.google.com/project/<project_ID>/logs?service=custom.googleapis.com`, where `project_ID` is the project ID of the Google Cloud Platform project
- Select `kubernetes.io/events` from the `All logs` drop-down menu
This sink supports monitoring metrics only. To use the Hawkular-Metrics sink add the following flag:

```
--sink=hawkular:<HAWKULAR_SERVER_URL>[?<OPTIONS>]
```

If `HAWKULAR_SERVER_URL` includes any path, the default `hawkular/metrics` is overridden. To use SSL, `HAWKULAR_SERVER_URL` has to start with `https`.
The following options are available:
- `tenant` - Hawkular-Metrics tenantId (default: `heapster`)
- `labelToTenant` - Hawkular-Metrics uses the given label's value as the tenant value when storing data
- `useServiceAccount` - Sink will use the service account token to authorize to Hawkular-Metrics (requires OpenShift)
- `insecure` - SSL connection will not verify the certificates
- `caCert` - A path to the CA certificate file that will be used in the connection
- `auth` - Kubernetes authentication file that will be used for constructing the TLSConfig
- `user` - Username to connect to the Hawkular-Metrics server
- `pass` - Password to connect to the Hawkular-Metrics server
- `filter` - Allows bypassing the storing of matching metrics; any number of `filter` parameters can be given with a syntax of `filter=operation(param)`. Supported operations and their params:
  - `label` - The syntax is `label(labelName:regexp)`, where `labelName` is a 1:1 match and the `regexp` to use for matching is given after the `:` delimiter
  - `name` - The syntax is `name(regexp)`, where the metric name (such as `cpu/usage`) is matched with a `regexp` filter
- `batchSize` - How many metrics are sent in each request to Hawkular-Metrics (default is 1000)
- `concurrencyLimit` - How many concurrent requests are used to send data to Hawkular-Metrics (default is 5)
A combination of `insecure` / `caCert` / `auth` is not supported; only one of these parameters is allowed at a time. Also, the combination of `useServiceAccount` with `user` + `pass` is not supported. To increase the performance of the Hawkular sink when running multiple instances of Hawkular-Metrics (such as a scaled scenario in OpenShift), modify the `batchSize` and `concurrencyLimit` parameters to balance the load on the Hawkular-Metrics instances.
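Putting these options together, a hypothetical invocation that sets the tenant, skips all metrics whose name starts with `cpu/`, and lowers the batch size might look like this (the server URL is made up):

```shell
--sink=hawkular:https://hawkular.example.com?tenant=heapster&filter=name(cpu.*)&batchSize=500
```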
This sink supports monitoring metrics and events. To use the OpenTSDB sink add the following flag:

```
--sink=opentsdb:<OPENTSDB_SERVER_URL>
```

Currently, accessing OpenTSDB via its REST APIs doesn't need any authentication, so you can enable the OpenTSDB sink like this:

```
--sink=opentsdb:http://192.168.1.8:4242
```
This sink supports monitoring metrics only. To use the Monasca sink add the following flag:

```
--sink=monasca:[?<OPTIONS>]
```
The available options are listed below, and some of them are mandatory. You need to provide access to the OpenStack identity service (Keystone).
Currently, only authorization through `username`/`user-id` + `password`/`api-key` is supported. If the agent's access to Monasca (for sending metrics) is restricted to a role, please specify the corresponding `tenant-id` for automatic scoped authorization.
The Monasca sink is then created with either the provided Monasca API Server URL, or the URL is discovered automatically if none is provided by the user.
The following options are available:
- `user-id` - ID of the OpenStack user
- `username` - Name of the OpenStack user
- `tenant-id` - ID of the OpenStack tenant (project)
- `keystone-url` - URL of the Keystone identity service (mandatory); must be a v3 server (required by Monasca)
- `password` - Password of the OpenStack user
- `api-key` - API key for the OpenStack user
- `domain-id` - ID of the OpenStack user's domain
- `domain-name` - Name of the OpenStack user's domain
- `monasca-url` - URL of the Monasca API server (optional: the sink will attempt to discover the service if not provided)
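As a sketch, a minimal flag using username/password authentication might look like the following; all values here are hypothetical, and `keystone-url` is the one mandatory option:

```shell
--sink=monasca:?keystone-url=http://keystone.example.com:5000/v3&username=heapster&password=secret&tenant-id=0123456789abcdef
```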
This sink supports monitoring metrics only. To use the Kafka sink add the following flag:

```
--sink="kafka:<?<OPTIONS>>"
```

Normally, a Kafka deployment has multiple brokers, so a broker list must be configured for the producer. The broker list and the topics for timeseries and events are therefore provided as options in the URL's query string:
- `brokers` - List of Kafka brokers
- `timeseriestopic` - Kafka topic for timeseries (default: `heapster-metrics`)
- `eventstopic` - Kafka topic for events (default: `heapster-events`)
For example,

```
--sink="kafka:?brokers=localhost:9092&brokers=localhost:9093&timeseriestopic=testseries&eventstopic=testtopic"
```
This sink supports metrics only. To use the Riemann sink add the following flag:

```
--sink="riemann:<RIEMANN_SERVER_URL>[?<OPTIONS>]"
```
The following options are available:
- `ttl` - TTL for writes to Riemann (default: `60` seconds)
- `state` - FIXME (default: `""`)
- `tags` - FIXME (default: none)
- `storeEvents` - Control storage of events (default: `true`)
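For example, a flag that raises the write TTL and disables event storage might look like this (the server URL is hypothetical):

```shell
--sink="riemann:http://riemann.example.com:5555?ttl=120&storeEvents=false"
```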
This sink supports monitoring metrics and events. To use the ElasticSearch sink add the following flag:

```
--sink=elasticsearch:<ES_SERVER_URL>[?<OPTIONS>]
```
Normally an ElasticSearch cluster has multiple nodes or a proxy, so these need
to be configured for the ElasticSearch sink. To do this, you can set
`ES_SERVER_URL` to a dummy value, and use the `?nodes=` query value for each
additional node in the cluster. For example:

```
--sink=elasticsearch:?nodes=http://foo.com:9200&nodes=http://bar.com:9200
```

(*) Note that using the `?nodes` notation will override `ES_SERVER_URL`.
Besides this, the following options can be set in the query string:
(*) Note that the keys are case sensitive
- `index` - the index for metrics and events (default: `heapster`)
- `esUserName` - the username if authentication is enabled
- `esUserSecret` - the password if authentication is enabled
- `maxRetries` - the number of retries that the Elastic client will perform for a single request before giving up and returning an error (default: `0`, so retry is disabled by default)
- `healthCheck` - specifies if healthchecks are enabled; enabled by default. To disable, provide a negative boolean value like `0` or `false`
- `sniff` - specifies if the sniffer is enabled; enabled by default. To disable, provide a negative boolean value like `0` or `false`
- `startupHealthcheckTimeout` - the time in seconds the healthcheck waits for a response from ElasticSearch on startup, i.e. when creating a client (default: `1`)
- `bulkWorkers` - number of workers for bulk processing (default: `5`)
- `cluster_name` - cluster name for different Kubernetes clusters (default: `default`)
Like this:

```
--sink="elasticsearch:?nodes=http://127.0.0.1:9200&index=testMetric"
```

or

```
--sink="elasticsearch:?nodes=http://127.0.0.1:9200&index=testEvent"
```
In order to use AWS Managed ElasticSearch we need to use one of the following methods:

- Making sure the public IPs of the Heapster are allowed on the ElasticSearch cluster's access policy

-OR-

- Configuring an access policy with IAM
  1. Configure the ElasticSearch cluster policy with an IAM user
  2. Create a secret that stores the IAM credentials
  3. Expose the credentials via the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`:

```
env:
- name: AWS_ACCESS_KEY_ID
  valueFrom:
    secretKeyRef:
      name: aws-heapster
      key: aws.id
- name: AWS_SECRET_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: aws-heapster
      key: aws.secret
```
This sink supports monitoring metrics only. To use the Graphite sink add the following flag:

```
--sink="graphite:<PROTOCOL>://<HOST>[:<PORT>][<?<OPTIONS>>]"
```

`PROTOCOL` must be `tcp` or `udp`; `PORT` is 2003 by default.
These options are available:
- `prefix` - Adds the specified prefix to all metric paths

For example,

```
--sink="graphite:tcp://metrics.example.com:2003?prefix=kubernetes.example"
```
Metrics are sent to Graphite with this hierarchy:

- `PREFIX`
  - `cluster`
  - `namespaces`
    - `NAMESPACE`
  - `nodes`
    - `NODE`
      - `pods`
        - `NAMESPACE`
          - `POD`
            - `containers`
              - `CONTAINER`
      - `sys-containers`
        - `SYS-CONTAINER`
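To illustrate the hierarchy, a container-level CPU metric under the prefix `kubernetes.example` might arrive at a path shaped roughly like the one below; the node, namespace, pod, and container names are all made up:

```
kubernetes.example.nodes.node-1.pods.default.my-pod.containers.my-container.cpu.usage
```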
Heapster can be configured to send Kubernetes metrics and events to multiple sinks by specifying the `--sink=...` flag multiple times.
For example, to send data to both GCM and InfluxDB at the same time, you can use the following:

```
--sink=gcm --sink=influxdb:http://monitoring-influxdb:80/
```