Break-up Logstash docs + Add Centralized Pipeline Management documentation #8471

:page_id: logstash
ifdef::env-github[]
****
link:https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-{page_id}.html[View this document on the Elastic website]
****
endif::[]
[id="{p}-{page_id}"]
= Run {ls} on ECK

NOTE: Running {ls} on ECK is compatible only with {ls} 8.7+.

This section describes how to configure and deploy {ls} with ECK.

* <<{p}-logstash-quickstart>>
* <<{p}-logstash-configuration>>
** <<{p}-logstash-configuring-logstash>>
** <<{p}-logstash-pipelines>>
** <<{p}-logstash-volumes>>
** <<{p}-logstash-pipelines-es>>
** <<{p}-logstash-expose-services>>
* <<{p}-logstash-securing-api>>
* <<{p}-logstash-plugins>>
** <<{p}-plugin-resources>>
** <<{p}-logstash-working-with-plugins-scaling>>
** <<{p}-logstash-working-with-plugin-considerations>>
** <<{p}-logstash-working-with-custom-plugins>>
* <<{p}-logstash-configuration-examples>>
* <<{p}-logstash-update-strategy>>
* <<{p}-logstash-advanced-configuration>>
** <<{p}-logstash-jvm-options>>
** <<{p}-logstash-keystore>>
* <<{p}-central-pipeline-management>>
include::logstash/quickstart.asciidoc[leveloffset=+1]
include::logstash/configuration.asciidoc[leveloffset=+1]
include::logstash/securing.asciidoc[leveloffset=+1]
include::logstash/plugins.asciidoc[leveloffset=+1]
include::logstash/configuration-examples.asciidoc[leveloffset=+1]
include::logstash/update-strategy.asciidoc[leveloffset=+1]
include::logstash/advanced-configuration.asciidoc[leveloffset=+1]
include::logstash/central-pipeline-management.asciidoc[leveloffset=+1]
`docs/orchestrating-elastic-stack-applications/logstash.asciidoc` (0 additions, 1,530 deletions): this file was deleted.

:parent_page_id: logstash-specification
ifdef::env-github[]
****
link:https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-{page_id}.html[View this document on the Elastic website]
****
endif::[]

[id="{p}-logstash-advanced-configuration"]
= Advanced configuration

[id="{p}-logstash-jvm-options"]
== Setting JVM options

You can change JVM settings with the `LS_JAVA_OPTS` environment variable, which overrides the default settings in `jvm.options`. This approach preserves the expected defaults from `jvm.options` and overrides only the options that explicitly need different values.

To do this, set the `LS_JAVA_OPTS` environment variable in the container definition of your {ls} resource:

[source,yaml,subs="attributes,+macros,callouts"]
----
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: quickstart
spec:
podTemplate:
spec:
containers:
- name: logstash
env:
- name: LS_JAVA_OPTS <1>
value: "-Xmx2g -Xms2g"
----
<1> This sets the maximum and minimum JVM heap size on each Pod to 2GB.

[id="{p}-logstash-keystore"]
== Setting keystore

You can specify sensitive settings with {k8s} secrets. ECK automatically injects these settings into the keystore before it starts {ls}.
The ECK operator continues to watch the secrets for changes and will restart {ls} Pods when it detects a change.

The {ls} keystore can be password-protected by setting an environment variable called `LOGSTASH_KEYSTORE_PASS`. Check the {logstash-ref}/keystore.html#keystore-password[{ls} Keystore] documentation for details.

[source,yaml,subs="attributes,+macros,callouts"]
----
apiVersion: v1
kind: Secret
metadata:
name: logstash-keystore-pass
stringData:
LOGSTASH_KEYSTORE_PASS: changed <1>
---
apiVersion: v1
kind: Secret
metadata:
name: logstash-secure-settings
stringData:
HELLO: Hallo
---
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: logstash-sample
spec:
version: {version}
count: 1
pipelines:
- pipeline.id: main
config.string: |-
input { exec { command => 'uptime' interval => 10 } }
filter {
if ("${HELLO:}" != "") { <2>
mutate { add_tag => ["awesome"] }
}
}
secureSettings:
- secretName: logstash-secure-settings
podTemplate:
spec:
containers:
- name: logstash
env:
- name: LOGSTASH_KEYSTORE_PASS
valueFrom:
secretKeyRef:
name: logstash-keystore-pass
key: LOGSTASH_KEYSTORE_PASS
----
<1> The password that protects the {ls} keystore.
<2> The syntax for referencing keystore entries is identical to the syntax for environment variables.

:parent_page_id: logstash-specification
ifdef::env-github[]
****
link:https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-{page_id}.html[View this document on the Elastic website]
****
endif::[]

[id="{p}-central-pipeline-management"]
= Centralized Pipeline Management

The following configuration shows how to enable link:{logstash-ref}/logstash-centralized-pipeline-management.html[Centralized Pipeline Management] when deploying {ls}, {es}, and {kib} on ECK.

[source,yaml,subs="attributes,+macros,callouts"]
----
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elasticsearch-sample
spec:
version: {version}
nodeSets:
- name: default
count: 3
config:
xpack.license.self_generated.type: trial <1>
---
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: logstash-sample
spec:
count: 1
version: {version}
elasticsearchRefs:
- clusterName: production
name: elasticsearch-sample <2>
config:
xpack.management.enabled: true <3>
xpack.management.elasticsearch.hosts: "${PRODUCTION_ES_HOSTS}" <4>
xpack.management.elasticsearch.username: "${PRODUCTION_ES_USER}"
xpack.management.elasticsearch.password: "${PRODUCTION_ES_PASSWORD}"
xpack.management.elasticsearch.ssl.certificate_authority: "${PRODUCTION_ES_SSL_CERTIFICATE_AUTHORITY}"
xpack.management.pipeline.id: ["*somekeys*"] <5>
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana-sample
spec:
version: {version}
elasticsearchRef:
name: elasticsearch-sample
count: 1
----

<1> Centralized Pipeline Management is a licensed feature; this example enables a trial license.

<2> Create a reference to the {es} cluster that manages the {ls-pipelines}.

<3> Enable Centralized Pipeline Management on {ls}.

<4> Use the hosts of the {es} cluster referenced in `elasticsearchRefs`.

<5> The pipeline IDs that are managed from {es}.
:parent_page_id: logstash-specification
:logstash_recipes: https://raw.githubusercontent.com/elastic/cloud-on-k8s/{eck_release_branch}/config/recipes/logstash
ifdef::env-github[]
****
link:https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-{page_id}.html[View this document on the Elastic website]
****
endif::[]

[id="{p}-logstash-configuration-examples"]
= Configuration examples

This section contains manifests that illustrate common use cases and can serve as a starting point for exploring {ls} deployed with ECK. These manifests are self-contained and work out-of-the-box on any non-secured {k8s} cluster. They all contain a three-node {es} cluster and a single {kib} instance.

CAUTION: The examples in this section are for illustration purposes only. They should not be considered production-ready.
Some of these examples use the `node.store.allow_mmap: false` setting on {es} which has performance implications and should be tuned for production workloads, as described in <<{p}-virtual-memory>>.
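For reference, this setting appears in the {es} portion of these manifests in a form similar to the following:

[source,yaml]
----
spec:
  nodeSets:
    - name: default
      count: 3
      config:
        node.store.allow_mmap: false   # acceptable for demos; tune virtual memory for production instead
----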


[id="{p}-logstash-configuration-single-pipeline-crd"]
== Single pipeline defined in CRD

[source,sh,subs="attributes"]
----
kubectl apply -f {logstash_recipes}/logstash-eck.yaml
----

Deploys {ls} with a single pipeline defined in the CRD.

[id="{p}-logstash-configuration-single-pipeline-secret"]
== Single pipeline defined in Secret

[source,sh,subs="attributes"]
----
kubectl apply -f {logstash_recipes}/logstash-pipeline-as-secret.yaml
----

Deploys {ls} with a single pipeline defined in a secret, referenced by a `pipelineRef`.

[id="{p}-logstash-configuration-pipeline-volume"]
== Pipeline configuration in mounted volume

[source,sh,subs="attributes"]
----
kubectl apply -f {logstash_recipes}/logstash-pipeline-as-volume.yaml
----

Deploys {ls} with a single pipeline defined in a secret, mounted as a volume, and referenced by `path.config`.

[id="{p}-logstash-configuration-custom-index"]
== Writing to a custom {es} index

[source,sh,subs="attributes"]
----
kubectl apply -f {logstash_recipes}/logstash-es-role.yaml
----

Deploys {ls} and {es}, and creates an updated version of the `eck_logstash_user_role` to write to a user-specified index.

[id="{p}-logstash-configuration-pq-dlq"]
== Creating persistent volumes for PQ and DLQ

[source,sh,subs="attributes"]
----
kubectl apply -f {logstash_recipes}/logstash-volumes.yaml
----

Deploys {ls}, {beats} and {es}. {ls} is configured with two pipelines:

* a main pipeline that reads from the {beats} instance and sends events to the DLQ if it is unable to write to {es}
* a second pipeline that reads from the DLQ

In addition, persistent queues are set up. This example shows how to configure persistent volumes outside of the default `logstash-data` persistent volume.
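The general shape of such a manifest is sketched below. The claim name, storage size, and mount path here are illustrative assumptions, not the exact contents of the recipe:

[source,yaml]
----
spec:
  volumeClaimTemplates:
    - metadata:
        name: pq-data                  # hypothetical claim for the persistent queue
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 2Gi
  podTemplate:
    spec:
      containers:
        - name: logstash
          volumeMounts:
            - name: pq-data
              mountPath: /usr/share/logstash/pq   # would be referenced by path.queue in the pipeline settings
----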


[id="{p}-logstash-configuration-stack-monitoring"]
== {es} and {kib} Stack Monitoring

[source,sh,subs="attributes"]
----
kubectl apply -f {logstash_recipes}/logstash-monitored.yaml
----

Deploys an {es} and {kib} monitoring cluster, and a {ls} that sends its monitoring information to this cluster. You can view the stack monitoring information in the monitoring cluster's {kib}.

[id="{p}-logstash-configuration-multiple-pipelines"]
== Multiple pipelines/multiple {es} clusters

[source,sh,subs="attributes"]
----
kubectl apply -f {logstash_recipes}/logstash-multi.yaml
----

Deploys {es} in `prod` and `qa` configurations, running in separate namespaces. {ls} is configured with a pipeline-to-pipeline setup, in which a source pipeline routes events to the `prod` and `qa` pipelines.

:parent_page_id: logstash-specification
ifdef::env-github[]
****
link:https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-{page_id}.html[View this document on the Elastic website]
****
endif::[]

[id="{p}-logstash-quickstart"]
= Quickstart

Apply the following specification to create a minimal {ls} deployment. It listens on port 5044 for a {beats} agent or {agent} configured to send to {ls}, creates the corresponding service, and writes its output to the {es} cluster named `quickstart`, created in the link:k8s-quickstart.html[{es} quickstart].

[source,yaml,subs="attributes,+macros,callouts"]
----
cat $$<<$$'EOF' | kubectl apply -f -
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: quickstart
spec:
count: 1
elasticsearchRefs:
- name: quickstart
clusterName: qs
version: {version}
pipelines:
- pipeline.id: main
config.string: |
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => [ "${QS_ES_HOSTS}" ]
user => "${QS_ES_USER}"
password => "${QS_ES_PASSWORD}"
ssl_certificate_authorities => "${QS_ES_SSL_CERTIFICATE_AUTHORITY}"
}
}
services:
- name: beats
service:
spec:
type: NodePort
ports:
- port: 5044
name: "filebeat"
protocol: TCP
targetPort: 5044
EOF
----

Check <<{p}-logstash-configuration-examples>> for more ready-to-use manifests.

. Check the status of {ls}.
+
[source,sh]
----
kubectl get logstash
----
+
[source,sh,subs="attributes"]
----
NAME AVAILABLE EXPECTED AGE VERSION
quickstart    1           1          4s    {version}
----

. List all the Pods that belong to a given {ls} specification.
+
[source,sh]
----
kubectl get pods --selector='logstash.k8s.elastic.co/name=quickstart'
----
+
[source,sh]
----
NAME READY STATUS RESTARTS AGE
quickstart-ls-0 1/1 Running 0 91s
----

. Access logs for a {ls} Pod.
+
[source,sh]
----
kubectl logs -f quickstart-ls-0
----
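To check the service created for the `beats` port, a command like the following can be used. The service name here is an assumption based on the `<resource-name>-ls-<service-name>` pattern that ECK uses for {ls} services:

[source,sh]
----
kubectl get service quickstart-ls-beats
----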
:parent_page_id: logstash-specification
ifdef::env-github[]
****
link:https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-{page_id}.html[View this document on the Elastic website]
****
endif::[]

[id="{p}-logstash-securing-api"]
= Securing {ls} API

[id="{p}-logstash-https"]
== Enable HTTPS

Access to the link:{logstash-ref}/monitoring-logstash.html#monitoring-api-security[{ls} Monitoring APIs] uses HTTPS by default: the operator sets the values `api.ssl.enabled: true`, `api.ssl.keystore.path`, and `api.ssl.keystore.password`.

You can further secure the {ls} Monitoring APIs by requiring HTTP Basic authentication: set `api.auth.type: basic` and provide the credentials `api.auth.basic.username` and `api.auth.basic.password`:

[source,yaml,subs="attributes,+macros,callouts"]
----
apiVersion: v1
kind: Secret
metadata:
name: logstash-api-secret
stringData:
API_USERNAME: "AWESOME_USER" <1>
API_PASSWORD: "T0p_Secret" <1>
---
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: logstash-sample
spec:
version: {version}
count: 1
config:
api.auth.type: basic
api.auth.basic.username: "${API_USERNAME}" <3>
api.auth.basic.password: "${API_PASSWORD}" <3>
podTemplate:
spec:
containers:
- name: logstash
envFrom:
- secretRef:
name: logstash-api-secret <2>
----
<1> Store the username and password in a Secret.
<2> Map the username and password to the environment variables of the Pod.
<3> At {ls} startup, `${API_USERNAME}` and `${API_PASSWORD}` are replaced by the values of the corresponding environment variables. Check link:{logstash-ref}/environment-variables.html[using environment variables] for more details.

An alternative is to set up the <<{p}-logstash-keystore,keystore>> to resolve `${API_USERNAME}` and `${API_PASSWORD}`.

NOTE: The variable substitution in `config` does not support the default value syntax.

[id="{p}-logstash-http-tls-keystore"]
== TLS keystore

The TLS keystore is automatically generated and includes a certificate and a private key, protected by the default password `changeit`.
You can change this password by setting the `api.ssl.keystore.password` value.

[source,yaml,subs="attributes"]
----
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: logstash-sample
spec:
count: 1
version: {version}
config:
api.ssl.keystore.password: "${SSL_KEYSTORE_PASSWORD}"
----


[id="{p}-logstash-http-custom-tls"]
== Provide your own certificate

If you want to use your own certificate, the required configuration is similar to that of {es}: configure the certificate in the `api` service. Check <<{p}-custom-http-certificate>> for details.

[source,yaml,subs="attributes,+macros,callouts"]
----
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: logstash-sample
spec:
version: {version}
count: 1
elasticsearchRef:
name: "elasticsearch-sample"
services:
- name: api <1>
tls:
certificate:
secretName: my-cert
----
<1> The service name `api` is reserved for the {ls} monitoring endpoint.

[id="{p}-logstash-http-disable-tls"]
== Disable TLS

You can disable TLS by disabling the generation of the self-signed certificate in the API service definition:

[source,yaml,subs="attributes"]
----
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: logstash-sample
spec:
version: {version}
count: 1
elasticsearchRef:
name: "elasticsearch-sample"
services:
- name: api
tls:
selfSignedCertificate:
disabled: true
----
:parent_page_id: logstash-specification
ifdef::env-github[]
****
link:https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-{page_id}.html[View this document on the Elastic website]
****
endif::[]

[id="{p}-logstash-update-strategy"]
= Update Strategy

When the configuration changes, the operator restarts the {ls} Pods to apply the new values. All Pods are restarted in reverse ordinal order.

== Default behavior

When `updateStrategy` is not present in the specification, it defaults to the following:

[source,yaml,subs="attributes,+macros,callouts"]
----
spec:
updateStrategy:
type: "RollingUpdate" <1>
rollingUpdate:
partition: 0 <2>
maxUnavailable: 1 <3>
----

<1> The `RollingUpdate` strategy will update Pods one by one in reverse ordinal order.
<2> All Pods from ordinal `replicas - 1` down to `partition` are updated. You can split the update into partitions to perform a link:https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-out-a-canary[canary rollout].
<3> This ensures that the cluster has no more than one unavailable Pod at any given point in time.
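For example, on a three-Pod {ls} resource, setting `partition` to `2` limits the first phase of a canary rollout to the highest-ordinal Pod. A sketch, where the `count` and `partition` values are illustrative:

[source,yaml]
----
spec:
  count: 3
  updateStrategy:
    type: "RollingUpdate"
    rollingUpdate:
      partition: 2       # only Pods with ordinal >= 2 are updated in this phase
      maxUnavailable: 1
----

Once the canary Pod is verified, lowering `partition` to `0` rolls the update out to the remaining Pods.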

== OnDelete

[source,yaml]
----
spec:
updateStrategy:
type: "OnDelete"
----

The `OnDelete` strategy does not automatically update Pods when the specification is modified. You need to restart the Pods yourself for the changes to take effect.
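With `OnDelete`, a restart can be triggered by deleting the Pods; the underlying StatefulSet then recreates them with the updated specification. A sketch, assuming a {ls} resource named `quickstart`:

[source,sh]
----
# Restart a single Pod; the StatefulSet recreates it with the new spec
kubectl delete pod quickstart-ls-0

# Or restart every Pod that belongs to this Logstash resource
kubectl delete pod --selector='logstash.k8s.elastic.co/name=quickstart'
----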
include::agent-fleet.asciidoc[leveloffset=+1]
include::maps.asciidoc[leveloffset=+1]
include::enterprise-search.asciidoc[leveloffset=+1]
include::beat.asciidoc[leveloffset=+1]
include::logstash-specification.asciidoc[leveloffset=+1]
include::stack-helm-chart.asciidoc[leveloffset=+1]
include::recipes.asciidoc[leveloffset=+1]
include::securing-stack.asciidoc[leveloffset=+1]