Elastic Stack on Docker

Preconfigured Security, Tools, and Self-Monitoring

Ready to use for Logging, Metrics, APM, Alerting, Machine Learning, and Security (SIEM) use cases.


Introduction

Elastic Stack (ELK) Docker Composition, preconfigured with Security, Monitoring, and Tools; Up with a Single Command.

Suitable for Demoing, MVPs and small production deployments.

Stack Version: 8.10.2 🎉 - Based on Official Elastic Docker Images

You can change the Elastic Stack version by setting ELK_VERSION in the .env file and rebuilding your images. Any version >= 8.0.0 is compatible with this template.
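For example, a minimal sketch of bumping the version and rebuilding (the version number below is illustrative; any release >= 8.0.0 should work):

    # .env
    ELK_VERSION=8.11.3        # hypothetical target version

    # rebuild the images against the new version and restart the stack
    $ make build
    $ docker-compose up -d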

Main Features 📜

  • Configured as a Production Single Node Cluster. (With a multi-node cluster option for experimenting).
  • Security Enabled By Default.
  • Configured to Enable:
    • Logging & Metrics Ingestion
      • Option to collect the logs of all Docker containers running on the host via make collect-docker-logs.
    • APM
    • Alerting
    • Machine Learning
    • Anomaly Detection
    • SIEM (Security Information and Event Management).
    • Trial License (optional).
  • Use Docker-Compose and .env to configure your entire stack's parameters.
  • Persist Elasticsearch's Keystore and SSL Certificates.
  • Self-Monitoring Metrics Enabled.
  • Prometheus Exporters for Stack Metrics.
  • Embedded Container Healthchecks for Stack Images.

More points

And a comparison between Elastdocker and the popular deviantony/docker-elk.


One of the most popular ELK on Docker repositories is the awesome deviantony/docker-elk. Elastdocker differs from deviantony/docker-elk in the following points.

  • Security enabled by default using Basic license, not Trial.

  • Persisting data by default in a volume.

  • Run in Production Mode (by enabling SSL on the Transport Layer and adding initial master node settings).

  • Persisting the generated Keystore, and creating an extendable script that makes it easier to recreate it every time the container is created.

  • Parameterize credentials in .env instead of hardcoding elastic:changeme in every component's config.

  • Parameterize all other config, such as Heap Size.

  • Add recommended environment settings, such as ulimits and disabling swap, to the docker-compose file.

  • Make it ready to be extended into a multi-node cluster.

  • Configuring Self-Monitoring and a Filebeat agent that ships the stack's own logs back to the stack (as a step toward shipping them to a dedicated monitoring cluster in the future).

  • Configured Prometheus Exporters.

  • A Makefile that simplifies everything into a few simple commands.


Requirements

  • Docker
  • Docker-Compose (or the Docker Compose plugin)

Setup

  1. Clone the Repository

    git clone https://github.com/sherifabdlnaby/elastdocker.git
  2. Initialize Elasticsearch Keystore and TLS Self-Signed Certificates

    $ make setup

    For Linux Docker hosts only: by default, virtual memory is not enough, so run the following command as root: sysctl -w vm.max_map_count=262144

  3. Start Elastic Stack

    $ make elk        <OR>        $ docker-compose up -d        <OR>        $ docker compose up -d
  4. Visit Kibana at https://localhost:5601 or https://<your_public_ip>:5601

    Default Username: elastic, Password: changeme

    • Notice that Kibana is configured to use HTTPS, so you'll need to use https:// before localhost:5601 in the browser.
    • Modify the .env file to your needs; most importantly ELASTIC_PASSWORD, which sets the superuser elastic's password, and ELASTICSEARCH_HEAP & LOGSTASH_HEAP for the Elasticsearch and Logstash heap sizes (see the example below).
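A minimal sketch of the .env values you'd most commonly change (the values below are illustrative placeholders, except changeme, which is the shipped default):

    ELASTIC_PASSWORD=a-strong-password    # replaces the default changeme
    ELASTICSEARCH_HEAP=1g                 # heap size; match the value format used in the shipped .env (e.g. 1g vs 1024m)
    LOGSTASH_HEAP=512m
    ELK_VERSION=8.10.2

If you change ELASTIC_PASSWORD, re-run make setup and restart the stack afterwards (see Notes).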

Whatever your host (e.g. AWS EC2, Azure, DigitalOcean, or an on-premise server), once you expose it to the network, the ELK components will be accessible on their respective ports. Since the enabled TLS uses a self-signed certificate, it is recommended to SSL-terminate public traffic using your own signed certificates.

πŸƒπŸ»β€β™‚οΈ To start ingesting logs, you can start by running make collect-docker-logs which will collect your host's container logs.

Additional Commands

Expand

To Start Monitoring and Prometheus Exporters

$ make monitoring

To Ship Docker Container Logs to ELK

$ make collect-docker-logs

To Start Elastic Stack, Tools and Monitoring

$ make all

To Start 2 Extra Elasticsearch nodes (recommended for experimenting only)

$ make nodes

To Rebuild Images

$ make build

To Bring Down the Stack

$ make down

To Reset Everything, Remove All Containers, and Delete DATA!

$ make prune

Configuration

  • Some configuration is parameterized in the .env file.
    • ELASTIC_PASSWORD, the elastic user's password (default: changeme, please change it).
    • ELK_VERSION, the Elastic Stack version (default: 8.10.2).
    • ELASTICSEARCH_HEAP, how much memory Elasticsearch allocates (default: 1GB, good for development only).
    • LOGSTASH_HEAP, how much memory Logstash allocates.
    • Other configuration such as the cluster name, node name, etc.
  • Elasticsearch Configuration in elasticsearch.yml at ./elasticsearch/config.
  • Logstash Configuration in logstash.yml at ./logstash/config/logstash.yml.
  • Logstash Pipeline in main.conf at ./logstash/pipeline/main.conf.
  • Kibana Configuration in kibana.yml at ./kibana/config.

Setting Up Keystore

You can extend the Keystore generation script by adding keys to the ./setup/keystore.sh script (e.g. S3 Snapshot Repository credentials); see the sketch below.
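A minimal sketch of such an addition, assuming the script calls elasticsearch-keystore directly (the repository's script may wrap this in its own helper functions, so adapt accordingly; the AWS_* variables are hypothetical inputs):

    # hypothetical lines appended to ./setup/keystore.sh:
    # store S3 snapshot repository credentials as secure settings
    echo "$AWS_ACCESS_KEY_ID"     | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
    echo "$AWS_SECRET_ACCESS_KEY" | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key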

To Re-generate Keystore:

make keystore

Notes

  • ⚠️ Elasticsearch HTTP layer is using SSL, thus mean you need to configure your elasticsearch clients with the CA in secrets/certs/ca/ca.crt, or configure client to ignore SSL Certificate Verification (e.g --insecure in curl).

  • Adding the two extra nodes to the cluster will make the cluster depend on them, and it won't start again without them.

  • The Makefile is a wrapper around Docker-Compose commands; use make help to list all commands.

  • Elasticsearch will save its data to a volume named elasticsearch-data.

  • Elasticsearch Keystore (that contains passwords and credentials) and SSL Certificate are generated in the ./secrets directory by the setup command.

  • Make sure to re-run make setup if you changed ELASTIC_PASSWORD, and restart the stack afterwards.

  • For Linux users, it's recommended to set the following configuration (run as root), since by default virtual memory is not enough:

    sysctl -w vm.max_map_count=262144
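As referenced in the SSL note above, here is a quick health check from the Docker host using curl (assuming Elasticsearch's default port 9200 is published, with the default elastic:changeme credentials):

    # verify against the generated CA
    $ curl --cacert secrets/certs/ca/ca.crt -u elastic:changeme "https://localhost:9200/_cluster/health?pretty"

    # or skip certificate verification (not recommended outside local testing)
    $ curl --insecure -u elastic:changeme "https://localhost:9200/_cluster/health?pretty"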


(Screenshots: Intro, Alerting, Maps, ML)

Working with Elastic APM

After completing the setup step, you will notice a container named apm-server, which gives you deeper visibility into your applications and can help you identify and resolve root-cause issues with correlated traces, logs, and metrics.

Authenticating with Elastic APM

In order to authenticate with Elastic APM, you will need the following:

  • The value of ELASTIC_APM_SECRET_TOKEN defined in the .env file, since a secret token is enabled by default
  • The ability to reach port 8200
  • The Elastic APM client installed in your application, e.g. for Node.js-based applications you need to install elastic-apm-node
  • The package imported in your application and the start function called; for a Node.js-based application you can do the following:
const apm = require('elastic-apm-node').start({
  serviceName: 'foobar',
  secretToken: process.env.ELASTIC_APM_SECRET_TOKEN,
  
  // https is enabled by default as per elastdocker configuration
  serverUrl: 'https://localhost:8200',
})

Make sure that the agent is started before you require any other modules in your Node.js application, i.e. before express, http, etc., as mentioned in the Elastic APM Agent - Node.js initialization docs.
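As a usage sketch, you could export the token from the stack's .env before starting your app (app.js is a placeholder for your entry point; this assumes the app runs on the same host and reads the variable as in the snippet above):

    $ export ELASTIC_APM_SECRET_TOKEN=$(grep '^ELASTIC_APM_SECRET_TOKEN=' .env | cut -d '=' -f2-)
    $ node app.js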

For more details or other languages, you can check the Elastic APM agents documentation.

Monitoring The Cluster

Via Self-Monitoring

Head to the Stack Monitoring tab in Kibana to see cluster metrics for all stack components.

(Screenshot: Monitoring Overview)

In Production, cluster metrics should be shipped to another dedicated monitoring cluster.

Via Prometheus Exporters

If you started the Prometheus exporters using the make monitoring command, they will expose metrics on the following ports:

  Prometheus Exporter       Port    Recommended Grafana Dashboard
  elasticsearch-exporter    9114    Elasticsearch by Kristian Jensen
  logstash-exporter         9304    logstash-monitoring by dpavlos
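To quickly verify the exporters are up, you can query their metrics endpoints directly (assuming the ports above are published on the Docker host; /metrics is the standard Prometheus endpoint path):

    $ curl -s http://localhost:9114/metrics | head    # elasticsearch-exporter
    $ curl -s http://localhost:9304/metrics | head    # logstash-exporter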

(Screenshot: Metrics)

License

MIT License Copyright (c) 2022 Sherif Abdel-Naby

Contribution

PRs are Open and Welcome.