# Elasticsearch Docker Image with integrated elasticsearch-prometheus-exporter plugin
This Docker image is hosted on Docker Hub: https://hub.docker.com/r/xtermi2/sec-elasticsearch-prometheus

It extends the original Elastic image, sets up default user passwords, and installs the elasticsearch-prometheus-exporter plugin. For a complete example with 2 Elasticsearch nodes, Kibana, Prometheus and Grafana, have a look at `xtermi2/sec-elasticsearch-prometheus/example/README.md`.
At startup, Elasticsearch is configured with a set of default users:
- elastic: An admin user which has no restrictions.
- kibana: The user Kibana uses to connect and communicate with Elasticsearch.
- beats_system: The user the Beats use when storing monitoring information in Elasticsearch.
- logstash_system: The user Logstash uses when storing monitoring information in Elasticsearch.
- apm_system: The user the APM server uses when storing monitoring information in Elasticsearch.
- remote_monitoring_user: The user Metricbeat uses when collecting and storing monitoring information in Elasticsearch.
This image also adds:
- a user named `beats`, which has the same roles as the default `beats_system` user (plus `beats_admin` and `filebeats_admin`) and has the same password. It is designed to manage and publish Filebeat data to Elasticsearch.
- a user named `kibana_user`, which has the roles `kibana_administrator`, `reporting_user` and `kibana_admin`, and is designed to log in to the Kibana UI. `kibana_administrator` is a custom role which grants admin rights for Kibana and read access to all indices.
You can easily set passwords for these users via Docker environment variables.
## Certificates

This image is only usable securely and therefore requires certificates. You need two types:
- a CA certificate, which signed the other certificate.
- a certificate plus private key for the Elasticsearch nodes, which is used for both the REST and transport APIs.

These certificates have to be mounted into the container at `/usr/share/elasticsearch/config/certificates/` and referenced in the Elasticsearch TLS configuration.
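One way to create such certificates is the `elasticsearch-certutil` tool shipped with the official Elastic image. A minimal sketch, assuming a node named `es01`; the image version tag, file names and output paths are placeholders you should adapt:

```shell
# Create a CA (PEM format); output lands in ./certs on the host
docker run --rm -v "$PWD/certs:/certs" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0 \
  bin/elasticsearch-certutil ca --pem --out /certs/ca.zip
unzip certs/ca.zip -d certs

# Issue a node certificate + private key signed by that CA
docker run --rm -v "$PWD/certs:/certs" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0 \
  bin/elasticsearch-certutil cert --pem \
    --ca-cert /certs/ca/ca.crt --ca-key /certs/ca/ca.key \
    --name es01 --dns es01,localhost --out /certs/es01.zip
unzip certs/es01.zip -d certs

# The resulting ca.crt, es01.crt and es01.key can then be mounted into the
# container at /usr/share/elasticsearch/config/certificates/
```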
## Host settings

Elasticsearch uses a `mmapfs` directory by default to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions. Run this command on the host:

```shell
sudo sysctl -w vm.max_map_count=262144
```

You can set it permanently by adding the `vm.max_map_count` setting to your `/etc/sysctl.conf`.
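A sketch of persisting the setting in `/etc/sysctl.conf` (some distributions prefer a drop-in file under `/etc/sysctl.d/` instead):

```shell
# Persist the mmap count limit across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
# Reload settings from the file so the change takes effect immediately
sudo sysctl -p
```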
## Environment variables

All passwords are unset by default, so if a password is not explicitly defined, the corresponding user is not available/usable!
- `ELASTIC_PASSWORD`: The password of the predefined admin user `elastic`.
- `KIBANA_PASSWORD`: The password of the predefined `kibana` user. This user is used and configured only in `kibana.yml` and has the `kibana_system` role.
- `KIBANA_USER_PASSWORD`: The password of the `kibana_user` user. This user is used to log in to the Kibana UI and has the `kibana_user` role.
- `BEATS_PASSWORD`: The password of the predefined `beats`/`beats_system` users.
- `LOGSTASH_PASSWORD`: The password of the predefined `logstash_system` user.
- `APM_PASSWORD`: The password of the predefined `apm_system` user.
- `REMOTE_MONITORING_PASSWORD`: The password of the predefined `remote_monitoring_user` user.
You also have to set the Elasticsearch-related TLS/security configuration. Have a look at the example `docker-compose.yml` for an impression.
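A minimal docker-compose sketch tying the pieces together; the image tag, password values, certificate file names and paths are assumptions to adapt to your setup (the official Elastic image accepts Elasticsearch settings as dotted environment variables):

```yaml
services:
  elasticsearch:
    image: xtermi2/sec-elasticsearch-prometheus:7.17.0
    environment:
      # Passwords for the predefined users (placeholders!)
      ELASTIC_PASSWORD: "changeme-elastic"
      KIBANA_PASSWORD: "changeme-kibana"
      KIBANA_USER_PASSWORD: "changeme-kibana-user"
      BEATS_PASSWORD: "changeme-beats"
      # TLS settings pointing at the mounted certificates (file names are assumptions)
      xpack.security.enabled: "true"
      xpack.security.http.ssl.enabled: "true"
      xpack.security.http.ssl.certificate: "certificates/es01.crt"
      xpack.security.http.ssl.key: "certificates/es01.key"
      xpack.security.http.ssl.certificate_authorities: "certificates/ca.crt"
      xpack.security.transport.ssl.enabled: "true"
      xpack.security.transport.ssl.certificate: "certificates/es01.crt"
      xpack.security.transport.ssl.key: "certificates/es01.key"
      xpack.security.transport.ssl.certificate_authorities: "certificates/ca.crt"
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certificates:ro
```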
If you have any problems with or questions about this image, please ask for help through a GitHub issue.