DomainTools Elastic Integration
The integration provides:
- Kibana plugin (via zip file)
- Deployment/install necessities
- Logstash config templates the customer can use
Components:
- Elasticsearch
- Kibana with DomainTools plugin
- Logstash
- Filebeat
- DomainTools Backend Python Service (Docker)

Setup steps:
- Setup log source items (below)
- Setup Logstash config
- Setup Filebeat config
- Setup .env file
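Before starting the setup steps, it helps to confirm each component is reachable. The hosts and ports below are assumptions for a default single-node install; adjust them to your environment.

```sh
# Quick reachability checks (hosts/ports are assumptions for a default install)
curl -s http://localhost:9200            # Elasticsearch node info
curl -s http://localhost:5601/api/status # Kibana status API
systemctl status logstash filebeat       # Logstash and Filebeat, if installed as services
docker ps                                # the DomainTools backend Python service container should be listed
```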
/bin/bash -c "$(curl -fsSL https://github.com/DomainTools/elastic-integration/raw/main/install.sh)"
If you don't have a .env file yet, this will create one from .env.example and notify you to edit it and retry make install.
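If the install stops because the .env was just created, the follow-up is to fill it in and re-run the install target. A minimal sketch, assuming the repo's Makefile drives the install (variable names come from the generated .env.example, not from here):

```sh
# Edit the generated .env with your DomainTools API credentials and service settings,
# then retry the install target as the script instructs
vi .env
make install
```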
- Assuming your Kibana install is located at /usr/share/kibana:
/usr/share/kibana/bin/kibana-plugin install https://github.com/DomainTools/elastic-integration/raw/main/domaintools[elastic-version]-[DomainTools version].zip
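To confirm the plugin landed, Kibana's plugin CLI can list what is installed:

```sh
# List installed Kibana plugins; the DomainTools plugin should appear in the output
/usr/share/kibana/bin/kibana-plugin list
```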
Items being set up:
- ILM policy - handles rollover of the related event source indices
- Index template - the template (mappings, etc.) applied to new indices created under the ILM policy
- Initial index - the starter index (its mapping matches the index template mapping)
./setup/run.sh
If these are not set up in Elasticsearch, then when Logstash sends events to Elasticsearch the event index rollover process will not work and our parsing of that index will fail.
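A quick way to confirm these objects exist is to query Elasticsearch directly. The names below are placeholders; check setup/run.sh for the actual policy, template, and index names used by the integration.

```sh
# Verify the ILM policy, index template, and initial index exist (names are placeholders)
curl -s 'localhost:9200/_ilm/policy/<policy-name>?pretty'
curl -s 'localhost:9200/_index_template/<template-name>?pretty'
curl -s 'localhost:9200/_cat/indices/<index-pattern>?v'
```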
To test and verify Logstash config files, run:
/usr/share/logstash/bin/logstash -f logstash_test.conf --path.settings /etc/logstash -t
- /usr/share/logstash/bin/logstash is the Logstash executable
- -f logstash_test.conf is the flag to point to the config file
- --path.settings /etc/logstash is the flag to point to the directory holding the logstash.yml file
- -t is the flag to test the config
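If you keep several pipeline files, the same test flag can be pointed at the whole config directory instead of a single file; the directory path below is an assumption for a package-based install.

```sh
# Syntax-check every pipeline file in the Logstash config directory (path is an assumption)
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d --path.settings /etc/logstash -t
```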
Solution: restart the dt_service_1 Python background Docker container to recreate the indices.
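A minimal sketch of that restart; the container name may differ depending on your Docker Compose version and project name, so confirm it with docker ps first:

```sh
# Find the backend service container and restart it so it recreates the indices
docker ps --filter name=dt_service
docker restart dt_service_1
```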
Solution:
- Can Kibana talk to the Python service?
- Are the DomainTools API credentials correct?
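A basic reachability check from the Kibana host is a curl against the backend service URL. The URL below is an assumption; substitute the service URL configured in your .env.

```sh
# Confirm the Kibana host can reach the backend Python service (URL and port are assumptions)
curl -v http://localhost:8000/
```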
Solution:
- Make sure the field in Logstash is being extracted correctly
- Make sure the service URL is pointing to the service
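One way to check both points is to inspect an event that has already been indexed and confirm it contains the field the service expects. The index pattern below is a placeholder; use the one created by setup/run.sh.

```sh
# Inspect one indexed event and confirm the expected field was extracted by Logstash
curl -s 'localhost:9200/<index-pattern>/_search?size=1&pretty'
```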