Amazon Security Lake integration - Logstash #135
Follow the Wazuh indexer integration using Logstash guide to install Logstash:
# Install Logstash
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
echo "[logstash-8.x]" >> /etc/yum.repos.d/logstash.repo
echo "name=Elastic repository for 8.x packages" >> /etc/yum.repos.d/logstash.repo
echo "baseurl=https://artifacts.elastic.co/packages/8.x/yum" >> /etc/yum.repos.d/logstash.repo
echo "gpgcheck=1" >> /etc/yum.repos.d/logstash.repo
echo "gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch" >> /etc/yum.repos.d/logstash.repo
echo "enabled=1" >> /etc/yum.repos.d/logstash.repo
echo "autorefresh=1" >> /etc/yum.repos.d/logstash.repo
echo "type=rpm-md" >> /etc/yum.repos.d/logstash.repo
sudo yum install logstash
# Install plugins (logstash-output-s3 is already installed)
sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch # logstash-output-s3
# Copy certificates
sudo mkdir -p /etc/logstash/wi-certs/
sudo cp /etc/wazuh-indexer/certs/root-ca.pem /etc/logstash/wi-certs/root-ca.pem
sudo chown logstash:logstash /etc/logstash/wi-certs/root-ca.pem
# Configuring new indexes
SKIP
# Configuring a pipeline
# Keystore
## Prepare keystore
set +o history
echo 'LOGSTASH_KEYSTORE_PASS="123456"' | sudo tee /etc/sysconfig/logstash
export LOGSTASH_KEYSTORE_PASS=123456
set -o history
sudo chown root /etc/sysconfig/logstash
sudo chmod 600 /etc/sysconfig/logstash
sudo systemctl start logstash
## Create keystore
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
## Store Wazuh indexer credentials (admin user)
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_USERNAME
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_PASSWORD
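Secrets stored in the keystore can be referenced from the pipeline configuration with the ${WAZUH_INDEXER_USERNAME} and ${WAZUH_INDEXER_PASSWORD} syntax, so the plain-text credentials never have to appear in the .conf file.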
# Pipeline
sudo touch /etc/logstash/conf.d/wazuh-s3.conf
# Replace with cp /vagrant/wazuh-s3.conf /etc/logstash/conf.d/wazuh-s3.conf
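The contents of wazuh-s3.conf are not included here. As a rough sketch only, assuming the keystore entries created above, the root CA copied earlier, and placeholder values for the indexer address, AWS credentials, region, bucket and index pattern (all of which would have to be adapted), it could look something like this:
input {
  opensearch {
    hosts => ["<wazuh-indexer-address>:9200"]         # placeholder
    user => "${WAZUH_INDEXER_USERNAME}"               # from the keystore
    password => "${WAZUH_INDEXER_PASSWORD}"           # from the keystore
    ssl => true
    ca_file => "/etc/logstash/wi-certs/root-ca.pem"   # certificate copied above
    index => "wazuh-alerts-4.x-*"                     # assumed index pattern
    query => '{ "query": { "range": { "@timestamp": { "gt": "now-1m" } } } }'
    schedule => "* * * * *"                           # poll the indexer every minute
  }
}
output {
  s3 {
    access_key_id => "<aws-access-key>"               # placeholder
    secret_access_key => "<aws-secret-key>"           # placeholder
    region => "<aws-region>"                          # placeholder
    bucket => "<s3-bucket-name>"                      # placeholder
    server_side_encryption => true                    # see bibliography
    server_side_encryption_algorithm => "AES256"
    codec => "json"
  }
}
The input polls the indexer on the given schedule and the output uploads the events to the bucket in batches; the final query, codec and object layout depend on the data transformation proposal referenced in the conclusions.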
sudo systemctl stop logstash
sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-s3.conf --path.settings /etc/logstash/
Success: `[INFO ][logstash.agent ] Pipelines running ...`
# Start Logstash
sudo systemctl enable logstash
sudo systemctl start logstash
Bibliography
- Enabled server-side encryption: https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
We analyzed the option of writing a custom solution to generate Parquet output. Just for reference, these are the dependencies needed to run it:
sudo apt update
sudo apt install -y -V ca-certificates lsb-release wget ruby-dev build-essential
wget https://apache.jfrog.io/artifactory/arrow/$(lsb_release --id --short | tr 'A-Z' 'a-z')/apache-arrow-apt-source-latest-$(lsb_release --codename --short).deb
sudo apt install -y -V ./apache-arrow-apt-source-latest-$(lsb_release --codename --short).deb
sudo apt update
sudo apt install -y -V libarrow-dev # For C++
gem install red-arrow
gem install red-parquet
Parquet output can be generated from a JSON file with a Ruby script like the following:
#!/usr/bin/env ruby
require 'arrow'
require 'parquet'
table = Arrow::Table.load("test.json", format: :json)
table.save("output.parquet")
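A quick way to test it, assuming the script is saved as json_to_parquet.rb (a hypothetical name) next to a test.json file, which Arrow's JSON reader expects to contain one JSON document per line:
ruby json_to_parquet.rb   # produces output.parquet in the same directory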
Conclusions
We've got a base for the Logstash pipeline and have verified that it works. We'll evolve the pipeline depending on the chosen proposal to transform the data. Check #145
Description
Wazuh's Amazon Security Lake integration, acting as a source, will use Logstash as a data forwarder. The data has to be forwarded from Wazuh's indices to an Amazon S3 bucket. Logstash provides input and output plugins that allow us to do that.
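As a minimal sketch of that flow (the plugin names are the ones installed above; every setting is left out and would need to be defined):
input {
  opensearch {
    # read events from the Wazuh indices
  }
}
output {
  s3 {
    # write them to the Amazon S3 bucket
  }
}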
Tasks