Filebeat upgrade to v7.8.1 #1531
Status: Closed
Changes from 3 commits (44 commits in total)
e1219af  Workarund restart rabbitmq pods during patching #1395
2d53074  fix due to review
f7fb93c  fixes after review, remove redundant code
a04d35b  Upgrade Filebeat to version 7.8.1 (rafzei)
3eacc51  Upgrade Filebeat to version 7.8.1 (rafzei)
3d2bd94  Merge branch 'issue846' of github.com:rafzei/epiphany into issue846 (rafzei)
fcfbe39  Named demo configuration the same as generated one (tolikt)
3f94597  Added deletion step description (tolikt)
5cc61bf  Added a note related to versions for upgrades (tolikt)
fd7d82a  Fixed syntax errors (tolikt)
619a6a4  Added prerequisites section in upgrade doc (tolikt)
01488cb  Added key encoding troubleshooting info (tolikt)
ccd354b  Merge pull request #1536 from TolikT/feature/update-doc (mkyc)
c3295a0  Test fixes for RabbitMQ 3.8.3 (#1533) (przemyslavic)
19e43a5  Merge pull request #1492 from epiphany-platform/hotfix/rabbitmq-resta… (ar3ndt)
6801b3e  fix missing variable image rabbitmq
f4e3982  Merge pull request #1540 from ar3ndt/fix_rabbitmq_restart_pods (ar3ndt)
9ffa891  Add Kubernetes Dashboard to COMPONENTS.md (#1546) (rafzei)
2e4ce10  Update CHANGELOG-0.7.md (seriva)
bbc7062  Merge pull request #1547 from epiphany-platform/minor-changelog-patch (seriva)
795a0ac  Modified kubeadm config template with extra certificate SANs (tolikt)
038133b  CHANGELOG-0.7.md update v0.7.1 release date (#1552) (rafzei)
6b8a96e  Increment version string to 0.7.1 (#1554) (rafzei)
aa47855  Moved certificates related tasks into separate file (tolikt)
b4fec67  Moved apiserver certificates part into separate role (tolikt)
b39ca7c  Apply new certificates if cluster was initially created without addit… (tolikt)
e392356  Apply new certificates if promote_to_ha but cluster was initially cre… (tolikt)
a58aad4  Added quotes for Ansible var (tolikt)
eab76c6  Process all k8s master addresses (tolikt)
2e60eb0  Update kubeadm config before new certificates generation (tolikt)
46589ea  Moved k8s apiserver role to common role tasks (tolikt)
e01ae5f  Update in-cluster kubeadm config each time certs generatad (tolikt)
21aa743  Placed in-cluster update to separate file in common role (tolikt)
f68f656  Added localhost to apiserver certificate san (tolikt)
fb911d4  Renamed apiserver certificates tasks file name according to common pr… (tolikt)
0afe894  Update certifiates for non-designated automation masters (tolikt)
8818811  Added certificate update part in HA promotion (tolikt)
5e163df  Removed duplicated parts and left a comment (tolikt)
5900f51  Use current kubeadm config instead of template processing (tolikt)
379fb2c  Merge pull request #1556 from TolikT/issue/kubectl-update-san (atsikham)
577bd67  Upgrade Filebeat to version 7.8.1 (rafzei)
3c4d355  Add CHANGELOG-0.8.md (rafzei)
0da7fc2  Changes after review (rafzei)
db5449b  Merge branch 'issue846' of github.com:rafzei/epiphany into issue846 (rafzei)
core/src/epicli/data/common/ansible/playbooks/roles/filebeat/defaults/main.yml (2 changes: 1 addition, 1 deletion)
@@ -1,3 +1,3 @@
 ---
 specification:
-  filebeat_version: "6.8.5"
+  filebeat_version: "7.8.1"
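Since the version bump lives in the role's defaults, it can be overridden per cluster rather than edited in place. A minimal sketch of such an override in an epicli configuration document, assuming Epiphany's usual configuration/<role> document shape (only specification.filebeat_version is confirmed by the diff above; the surrounding keys are assumptions):

```yaml
# Hypothetical per-cluster override; the 'kind' value follows Epiphany's
# configuration/<role> naming convention and is not taken from this PR.
kind: configuration/filebeat
specification:
  filebeat_version: "7.8.1"   # role default after this PR; was "6.8.5"
```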
@@ -19,7 +19,7 @@ filebeat.inputs:
 - type: log
   enabled: true

-  # Paths (in alphabetical order) that should be crawled and fetched. Glob based paths.
+  # Paths that should be crawled and fetched. Glob based paths.
   paths:
   # - /var/log/audit/audit.log
   - /var/log/auth.log
@@ -34,7 +34,7 @@ filebeat.inputs:
   - /var/log/secure
   - /var/log/syslog

   # Exclude lines. A list of regular expressions to match. It drops the lines that are
   # matching any regular expression from the list.
   #exclude_lines: ['^DBG']
@@ -67,9 +67,10 @@ filebeat.inputs:
   # that was (not) matched before or after or as long as a pattern is not matched based on negate.
   # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
   #multiline.match: after

{% if 'postgresql' in group_names %}

-  #--- PostgreSQL ---
+  # ============================== PostgreSQL ==============================

   # Filebeat postgresql module doesn't support custom log_line_prefix (without patching), see https://discuss.elastic.co/t/filebeats-with-postgresql-module-custom-log-line-prefix/204457
   # Dedicated configuration to handle log messages spanning multiple lines.
@@ -85,9 +86,10 @@ filebeat.inputs:
   negate: true
   match: after
{% endif %}

{% if 'kubernetes_master' in group_names or 'kubernetes_node' in group_names %}

-#--- Kubernetes ---
+# ============================== Kubernetes ==============================

# K8s metadata are fetched from Docker labels to not make Filebeat on worker nodes dependent on K8s master
# since Filebeat should start even if K8s master is not available.
@@ -112,7 +114,7 @@ filebeat.inputs:
   - docker # Drop all fields added by 'add_docker_metadata' that were not renamed
{% endif %}

-#============================= Filebeat modules ===============================
+# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
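The "drop all fields added by 'add_docker_metadata'" line is the tail end of a label-based enrichment chain: Kubernetes metadata comes from Docker labels so a worker-node Filebeat never has to reach the K8s API server. A hedged sketch of that chain (the label paths and rename targets are assumptions for illustration; the real template may map different fields):

```yaml
processors:
  - add_docker_metadata: ~            # pulls container labels into 'docker.*'
  - rename:
      fields:
        - from: "docker.container.labels.io.kubernetes.pod.name"
          to: "kubernetes.pod.name"   # assumed mapping, shown for illustration
      ignore_missing: true
      fail_on_error: false
  - drop_fields:
      fields:
        - docker                      # drop everything that was not renamed
```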
@@ -124,14 +126,14 @@ filebeat.config.modules:
  # Period on which files under path should be checked for changes
  #reload.period: 10s

-#==================== Elasticsearch template setting ==========================
+# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

-#================================ General =====================================
+# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
@@ -147,18 +149,54 @@ setup.template.settings:
  # env: staging

-#============================== Dashboards =====================================
+# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
-# options here, or by using the `-setup` CLI flag or the `setup` command.
-#setup.dashboards.enabled: true
+# options here or by using the `setup` command.
+#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

+# ====================== Index Lifecycle Management (ILM) ======================
+
+# Configure index lifecycle management (ILM). These settings create a write
+# alias and add additional settings to the index template. When ILM is enabled,
+# output.elasticsearch.index is ignored, and the write alias is used to set the
+# index name.
+
+# Enable ILM support. Valid values are true, false, and auto. When set to auto
+# (the default), the Beat uses index lifecycle management when it connects to a
+# cluster that supports ILM; otherwise, it creates daily indices.
+# Disabled because ILM is not enabled by default in Epiphany
+setup.ilm.enabled: false
+
+# Set the prefix used in the index lifecycle write alias name. The default alias
+# name is 'filebeat-%{[agent.version]}'.
+#setup.ilm.rollover_alias: 'filebeat'
+
+# Set the rollover index pattern. The default is "%{now/d}-000001".
+#setup.ilm.pattern: "{now/d}-000001"
+
+# Set the lifecycle policy name. The default policy name is
+# 'beatname'.
+#setup.ilm.policy_name: "mypolicy"
+
+# The path to a JSON file that contains a lifecycle policy configuration. Used
+# to load your own lifecycle policy.
+#setup.ilm.policy_file:
+
+# Disable the check for an existing lifecycle policy. The default is true. If
+# you disable this check, set setup.ilm.overwrite: true so the lifecycle policy
+# can be installed.
+#setup.ilm.check_exists: true
+
+# Overwrite the lifecycle policy at startup. The default is false.
+#setup.ilm.overwrite: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
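With setup.ilm.enabled: false, as the hunk above sets, Filebeat falls back to plain daily indices. If a deployment did want rollover-managed indices, a minimal sketch using the settings documented above would be (alias and policy names are placeholders, not values from this PR):

```yaml
setup.ilm.enabled: true
setup.ilm.rollover_alias: "filebeat"       # write alias; output.elasticsearch.index is then ignored
setup.ilm.pattern: "{now/d}-000001"        # rollover index suffix
setup.ilm.policy_name: "filebeat-policy"   # placeholder policy name
```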
@@ -182,9 +220,9 @@ setup.template.settings:
# the Default Space will be used.
#space.id:

-#============================= Elastic Cloud ==================================
+# =============================== Elastic Cloud ================================

-# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
+# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
@@ -210,19 +248,24 @@ output.elasticsearch:
    - "https://{{hostvars[host]['ansible_hostname']}}:9200"
{% endfor %}

  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  ssl.verification_mode: none
  username: logstash
  password: logstash
{% else %}
  hosts: []
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
  #ssl.verification_mode: none
+  # Authentication credentials - either API key or username/password.
+  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
{% endif %}

-#----------------------------- Logstash output --------------------------------
+# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
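For reference, the Jinja2 loop above could render to something like the following for a two-node cluster (the hostnames are hypothetical; the credentials and the disabled certificate verification are exactly as templated):

```yaml
output.elasticsearch:
  hosts:
    - "https://es-node-1:9200"   # hypothetical ansible_hostname values
    - "https://es-node-2:9200"
  protocol: "https"
  ssl.verification_mode: none    # HTTPS without certificate validation
  username: logstash
  password: logstash
```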
@@ -237,15 +280,17 @@ output.elasticsearch:
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

-#================================ Processors =====================================
+# ================================= Processors =================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  #- add_host_metadata: ~
  - add_cloud_metadata: ~
  #- add_docker_metadata: ~
  #- add_kubernetes_metadata: ~

-#================================ Logging =====================================
+# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
@@ -256,17 +301,30 @@ processors:
# "publish", "service".
#logging.selectors: ["*"]

-#============================== Xpack Monitoring ===============================
-# filebeat can export internal metrics to a central Elasticsearch monitoring
+# ============================= X-Pack Monitoring ==============================
+# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
-#xpack.monitoring.enabled: false
+#monitoring.enabled: false

+# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
+# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
+# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
+#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
-# Elasticsearch output are accepted here as well. Any setting that is not set is
-# automatically inherited from the Elasticsearch output configuration, so if you
-# have the Elasticsearch output configured, you can simply uncomment the
-# following line.
-#xpack.monitoring.elasticsearch:
+# Elasticsearch output are accepted here as well.
+# Note that the settings should point to your Elasticsearch *monitoring* cluster.
+# Any setting that is not set is automatically inherited from the Elasticsearch
+# output configuration, so if you have the Elasticsearch output configured such
+# that it is pointing to your Elasticsearch monitoring cluster, you can simply
+# uncomment the following line.
+#monitoring.elasticsearch:

+# ================================= Migration ==================================
+
+# Enable the compatibility layer for Elastic Common Schema (ECS) fields.
+# This allows to enable 6 > 7 migration aliases.
+#migration.6_to_7.enabled: true
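Because this PR jumps straight from 6.8.5 to 7.8.1, dashboards or queries written against 6.x field names may break; the commented-out migration layer above is the upstream escape hatch. Enabling it is a one-line change, shown here purely as an illustration (the PR itself leaves it commented out):

```yaml
# Adds 6.x -> 7.x field aliases (ECS compatibility layer).
migration.6_to_7.enabled: true
```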
@@ -35,7 +35,7 @@ elasticsearch-oss-6.8.5
 elasticsearch-oss-7.3.2 # Open Distro for Elasticsearch
 erlang-21.3.8.7
 ethtool
-filebeat-6.8.5 # actually it's filebeat-oss
+filebeat-7.8.1 # actually it's filebeat-oss
 firewalld
 fontconfig # for grafana
 fping