Filebeat upgrade to v7.8.1 #1531

Closed (wants to merge 44 commits)

Commits
e1219af
Workarund restart rabbitmq pods during patching #1395
Jul 20, 2020
2d53074
fix due to review
Jul 31, 2020
f7fb93c
fixes after review, remove redundant code
Aug 6, 2020
a04d35b
Upgrade Filebeat to version 7.8.1
rafzei Aug 6, 2020
3eacc51
Upgrade Filebeat to version 7.8.1
rafzei Aug 6, 2020
3d2bd94
Merge branch 'issue846' of github.com:rafzei/epiphany into issue846
rafzei Aug 7, 2020
fcfbe39
Named demo configuration the same as generated one
tolikt Aug 7, 2020
3f94597
Added deletion step description
tolikt Aug 7, 2020
5cc61bf
Added a note related to versions for upgrades
tolikt Aug 7, 2020
fd7d82a
Fixed syntax errors
tolikt Aug 7, 2020
619a6a4
Added prerequisites section in upgrade doc
tolikt Aug 7, 2020
01488cb
Added key encoding troubleshooting info
tolikt Aug 7, 2020
ccd354b
Merge pull request #1536 from TolikT/feature/update-doc
mkyc Aug 10, 2020
c3295a0
Test fixes for RabbitMQ 3.8.3 (#1533)
przemyslavic Aug 10, 2020
19e43a5
Merge pull request #1492 from epiphany-platform/hotfix/rabbitmq-resta…
ar3ndt Aug 10, 2020
6801b3e
fix missing variable image rabbitmq
Aug 10, 2020
f4e3982
Merge pull request #1540 from ar3ndt/fix_rabbitmq_restart_pods
ar3ndt Aug 10, 2020
9ffa891
Add Kubernetes Dashboard to COMPONENTS.md (#1546)
rafzei Aug 11, 2020
2e4ce10
Update CHANGELOG-0.7.md
seriva Aug 11, 2020
bbc7062
Merge pull request #1547 from epiphany-platform/minor-changelog-patch
seriva Aug 11, 2020
795a0ac
Modified kubeadm config template with extra certificate SANs
tolikt Aug 11, 2020
038133b
CHANGELOG-0.7.md update v0.7.1 release date (#1552)
rafzei Aug 12, 2020
6b8a96e
Increment version string to 0.7.1 (#1554)
rafzei Aug 12, 2020
aa47855
Moved certificates related tasks into separate file
tolikt Aug 11, 2020
b4fec67
Moved apiserver certificates part into separate role
tolikt Aug 11, 2020
b39ca7c
Apply new certificates if cluster was initially created without addit…
tolikt Aug 11, 2020
e392356
Apply new certificates if promote_to_ha but cluster was initially cre…
tolikt Aug 11, 2020
a58aad4
Added quotes for Ansible var
tolikt Aug 12, 2020
eab76c6
Process all k8s master addresses
tolikt Aug 12, 2020
2e60eb0
Update kubeadm config before new certificates generation
tolikt Aug 12, 2020
46589ea
Moved k8s apiserver role to common role tasks
tolikt Aug 12, 2020
e01ae5f
Update in-cluster kubeadm config each time certs generatad
tolikt Aug 12, 2020
21aa743
Placed in-cluster update to separate file in common role
tolikt Aug 12, 2020
f68f656
Added localhost to apiserver certificate san
tolikt Aug 12, 2020
fb911d4
Renamed apiserver certificates tasks file name according to common pr…
tolikt Aug 12, 2020
0afe894
Update certifiates for non-designated automation masters
tolikt Aug 12, 2020
8818811
Added certificate update part in HA promotion
tolikt Aug 13, 2020
5e163df
Removed duplicated parts and left a comment
tolikt Aug 13, 2020
5900f51
Use current kubeadm config instead of template processing
tolikt Aug 14, 2020
379fb2c
Merge pull request #1556 from TolikT/issue/kubectl-update-san
atsikham Aug 18, 2020
577bd67
Upgrade Filebeat to version 7.8.1
rafzei Aug 6, 2020
3c4d355
Add CHANGELOG-0.8.md
rafzei Aug 19, 2020
0da7fc2
Changes after review
rafzei Aug 19, 2020
db5449b
Merge branch 'issue846' of github.com:rafzei/epiphany into issue846
rafzei Aug 19, 2020
9 changes: 3 additions & 6 deletions CHANGELOG-0.7.md
@@ -1,11 +1,13 @@
# Changelog 0.7

- ## [0.7.1] 2020-07-xx
+ ## [0.7.1] 2020-08-12

### Added

- Minor logging improvements added while fixing issue [#1424](https://github.com/epiphany-platform/epiphany/issues/1424)
- [#1438](https://github.com/epiphany-platform/epiphany/pull/1438) - Rename Terraform plugin vendor in VSCode recommendations
- [#1413](https://github.com/epiphany-platform/epiphany/issues/1413) - Set protocol for Vault only in one place in configuration
- [#1423](https://github.com/epiphany-platform/epiphany/issues/1423) - Error reading generated service principal

### Updated

@@ -28,11 +30,6 @@
- [#1336](https://github.com/epiphany-platform/epiphany/issues/1336) - Deployment of version 0.7.0 failed on-prem (spec.hostname)
- [#1394](https://github.com/epiphany-platform/epiphany/issues/1394) - Cannot access Kubernetes dashboard after upgrading

### Added

- [#1413](https://github.com/epiphany-platform/epiphany/issues/1413) - Set protocol for Vault only in one place in configuration
- [#1423](https://github.com/epiphany-platform/epiphany/issues/1423) - Error reading generated service principal

## [0.7.0] 2020-06-30

### Added
11 changes: 11 additions & 0 deletions CHANGELOG-0.8.md
@@ -0,0 +1,11 @@
# Changelog 0.8

## [0.8.0] 2020-09-xx

### Added

### Updated

- [#846](https://github.com/epiphany-platform/epiphany/issues/846) - Update Filebeat to v7.8.1

### Fixed
9 changes: 7 additions & 2 deletions README.md
@@ -50,10 +50,10 @@ This minimum file definition is fine to start with, if you need more control ove
epicli init -p aws -n demo --full
```

- You will need to modify a few values (like your AWS secrets, directory path for SSH keys). Once you are done with `demo.yaml` you can start cluster deployment by executing:
+ You will need to modify a few values (like your AWS secrets, directory path for SSH keys). Once you are done with `demo.yml` you can start cluster deployment by executing:

```shell
- epicli apply -f demo.yaml
+ epicli apply -f demo.yml
```
You will be asked for a password that will be used for encryption of some of the build artifacts. More information [here](docs/home/howto/SECURITY.md#how-to-run-epicli-with-password)

@@ -63,6 +63,11 @@ epicli backup -f <file.yml> -b <build_folder>
epicli recovery -f <file.yml> -b <build_folder>
```

To delete all deployed components, the following command should be used:

```shell
epicli delete -b <build_folder>
```

Find more information using table of contents below - especially the [How-to guides](docs/home/HOWTO.md).

2 changes: 1 addition & 1 deletion core/src/epicli/cli/version.txt.py
@@ -1 +1 @@
- 0.7.0
+ 0.7.1
@@ -1,3 +1,3 @@
---
specification:
- filebeat_version: "6.8.5"
+ filebeat_version: "7.8.1"
@@ -19,7 +19,7 @@ filebeat.inputs:
- type: log
enabled: true

- # Paths (in alphabetical order) that should be crawled and fetched. Glob based paths.
+ # Paths that should be crawled and fetched. Glob based paths.
paths:
# - /var/log/audit/audit.log
- /var/log/auth.log
@@ -34,7 +34,7 @@ filebeat.inputs:
- /var/log/secure
- /var/log/syslog

# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']

@@ -67,9 +67,10 @@ filebeat.inputs:
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after

{% if 'postgresql' in group_names %}

- #--- PostgreSQL ---
+ # ============================== PostgreSQL ==============================

# Filebeat postgresql module doesn't support custom log_line_prefix (without patching), see https://discuss.elastic.co/t/filebeats-with-postgresql-module-custom-log-line-prefix/204457
# Dedicated configuration to handle log messages spanning multiple lines.
@@ -85,9 +86,10 @@ filebeat.inputs:
negate: true
match: after
{% endif %}

{% if 'kubernetes_master' in group_names or 'kubernetes_node' in group_names %}

- #--- Kubernetes ---
+ # ============================== Kubernetes ==============================

# K8s metadata are fetched from Docker labels to not make Filebeat on worker nodes dependent on K8s master
# since Filebeat should start even if K8s master is not available.
@@ -112,7 +114,7 @@ filebeat.inputs:
- docker # Drop all fields added by 'add_docker_metadata' that were not renamed
{% endif %}

- #============================= Filebeat modules ===============================
+ # ============================== Filebeat modules ==============================

filebeat.config.modules:
# Glob pattern for configuration loading
@@ -124,14 +126,14 @@ filebeat.config.modules:
# Period on which files under path should be checked for changes
#reload.period: 10s

- #==================== Elasticsearch template setting ==========================
+ # ======================= Elasticsearch template setting =======================

setup.template.settings:
index.number_of_shards: 3
#index.codec: best_compression
#_source.enabled: false

- #================================ General =====================================
+ # ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
@@ -147,18 +149,54 @@ setup.template.settings:
# env: staging


- #============================== Dashboards =====================================
+ # ================================= Dashboards =================================
  # These settings control loading the sample dashboards to the Kibana index. Loading
  # the dashboards is disabled by default and can be enabled either by setting the
- # options here, or by using the `-setup` CLI flag or the `setup` command.
- #setup.dashboards.enabled: true
+ # options here or by using the `setup` command.
+ #setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# ====================== Index Lifecycle Management (ILM) ======================

# Configure index lifecycle management (ILM). These settings create a write
# alias and add additional settings to the index template. When ILM is enabled,
# output.elasticsearch.index is ignored, and the write alias is used to set the
# index name.

# Enable ILM support. Valid values are true, false, and auto. When set to auto
# (the default), the Beat uses index lifecycle management when it connects to a
# cluster that supports ILM; otherwise, it creates daily indices.
# Disabled because ILM is not enabled by default in Epiphany
setup.ilm.enabled: false

# Set the prefix used in the index lifecycle write alias name. The default alias
# name is 'filebeat-%{[agent.version]}'.
#setup.ilm.rollover_alias: 'filebeat'

# Set the rollover index pattern. The default is "%{now/d}-000001".
#setup.ilm.pattern: "{now/d}-000001"

# Set the lifecycle policy name. The default policy name is
# 'beatname'.
#setup.ilm.policy_name: "mypolicy"

# The path to a JSON file that contains a lifecycle policy configuration. Used
# to load your own lifecycle policy.
#setup.ilm.policy_file:

# Disable the check for an existing lifecycle policy. The default is true. If
# you disable this check, set setup.ilm.overwrite: true so the lifecycle policy
# can be installed.
#setup.ilm.check_exists: true

# Overwrite the lifecycle policy at startup. The default is false.
#setup.ilm.overwrite: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
@@ -182,9 +220,9 @@ setup.template.settings:
# the Default Space will be used.
#space.id:

- #============================= Elastic Cloud ==================================
+ # =============================== Elastic Cloud ================================

- # These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
+ # These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
@@ -210,19 +248,24 @@ output.elasticsearch:
- "https://{{hostvars[host]['ansible_hostname']}}:9200"
{% endfor %}

# Protocol - either `http` (default) or `https`.
protocol: "https"
ssl.verification_mode: none
username: logstash
password: logstash
{% else %}
hosts: []
# Protocol - either `http` (default) or `https`.
#protocol: "https"

#ssl.verification_mode: none
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
#username: "elastic"
#password: "changeme"
{% endif %}

- #----------------------------- Logstash output --------------------------------
+ # ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
@@ -237,15 +280,17 @@ output.elasticsearch:
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"

- #================================ Processors =====================================
+ # ================================= Processors =================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
#- add_host_metadata: ~
- add_cloud_metadata: ~
#- add_docker_metadata: ~
#- add_kubernetes_metadata: ~

- #================================ Logging =====================================
+ # ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
@@ -256,17 +301,30 @@ processors:
# "publish", "service".
#logging.selectors: ["*"]

- #============================== Xpack Monitoring ===============================
- # filebeat can export internal metrics to a central Elasticsearch monitoring
+ # ============================= X-Pack Monitoring ==============================
+ # Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
- #xpack.monitoring.enabled: false
+ #monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

  # Uncomment to send the metrics to Elasticsearch. Most settings from the
- # Elasticsearch output are accepted here as well. Any setting that is not set is
- # automatically inherited from the Elasticsearch output configuration, so if you
- # have the Elasticsearch output configured, you can simply uncomment the
- # following line.
- #xpack.monitoring.elasticsearch:
+ # Elasticsearch output are accepted here as well.
+ # Note that the settings should point to your Elasticsearch *monitoring* cluster.
+ # Any setting that is not set is automatically inherited from the Elasticsearch
+ # output configuration, so if you have the Elasticsearch output configured such
+ # that it is pointing to your Elasticsearch monitoring cluster, you can simply
+ # uncomment the following line.
+ #monitoring.elasticsearch:

# ================================= Migration ==================================

# Enable the compatibility layer for Elastic Common Schema (ECS) fields.
# This allows to enable 6 > 7 migration aliases.
#migration.6_to_7.enabled: true
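For reference, the `output.elasticsearch` block earlier in this template is a Jinja2 conditional; on a cluster with two logging hosts it would render to something like the following (the hostnames `node1` and `node2` are hypothetical, the remaining values come from the template):

```yaml
# Hypothetical rendered form of the templated output.elasticsearch section,
# assuming an inventory with two logging hosts named node1 and node2:
output.elasticsearch:
  hosts:
    - "https://node1:9200"
    - "https://node2:9200"
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  ssl.verification_mode: none
  username: logstash
  password: logstash
```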
@@ -0,0 +1,34 @@
---
- name: Copy /etc/kubernetes/pki/apiserver.{crt,key}
copy:
dest: "{{ item }}.OLD"
src: "{{ item }}"
remote_src: true
loop:
- /etc/kubernetes/pki/apiserver.crt
- /etc/kubernetes/pki/apiserver.key

- name: Delete /etc/kubernetes/pki/apiserver.{crt,key}
file:
path: "{{ item }}"
state: absent
loop:
- /etc/kubernetes/pki/apiserver.crt
- /etc/kubernetes/pki/apiserver.key

- name: Render new certificates /etc/kubernetes/pki/apiserver.{crt,key}
shell: |
kubeadm init phase certs apiserver \
--config /etc/kubeadm/kubeadm-config.yml
args:
executable: /bin/bash
creates: /etc/kubernetes/pki/apiserver.key

- name: Restart apiserver
shell: |
docker ps \
--filter 'name=kube-apiserver_kube-apiserver' \
--format '{{ "{{.ID}}" }}' \
| xargs --no-run-if-empty docker kill
args:
executable: /bin/bash
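Two small patterns in the task file above are worth noting: the `creates:` argument makes certificate generation idempotent (the shell task is skipped when the key already exists), and `xargs --no-run-if-empty` keeps the restart step from failing when `docker ps` matches no container. A minimal sketch of both behaviours follows; the `generate` function, the temp file, and `echo kill` are hypothetical stand-ins for `kubeadm init phase certs` and `docker kill`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of Ansible's `creates:` guard: run the generator only when the
# target file is missing (stand-in for /etc/kubernetes/pki/apiserver.key).
target="$(mktemp -u)"
run_count=0
generate() { run_count=$((run_count + 1)); touch "$target"; }

[ -e "$target" ] || generate   # first run: file absent, generator runs
[ -e "$target" ] || generate   # second run: file exists, generator skipped
echo "generator ran ${run_count} time(s)"

# Sketch of the restart step: `xargs --no-run-if-empty` turns an empty
# `docker ps` result into a no-op instead of calling `docker kill` with no args.
printf '' | xargs --no-run-if-empty echo kill            # prints nothing
printf 'abc123\n' | xargs --no-run-if-empty echo kill    # prints: kill abc123

rm -f "$target"
```

Without `--no-run-if-empty`, GNU xargs would invoke the command once even on empty input, which here would mean `docker kill` with no container ID and a non-zero exit code.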
@@ -0,0 +1,29 @@
---
- name: Collect kubeadm-config
shell: |
kubectl get configmap kubeadm-config \
--namespace kube-system \
--output jsonpath={{ jsonpath }}
vars:
jsonpath: >-
'{.data.ClusterConfiguration}'
environment:
KUBECONFIG: /etc/kubernetes/admin.conf
args:
executable: /bin/bash
register: kubeadm_config
changed_when: false

- name: Extend kubeadm config
set_fact:
kubeadm_config: >-
{{ original | combine(update, recursive=true) }}
vars:
original: >-
{{ kubeadm_config.stdout | from_yaml }}

- name: Render /etc/kubeadm/kubeadm-config.yml
copy:
dest: /etc/kubeadm/kubeadm-config.yml
content: >-
{{ kubeadm_config | to_nice_yaml }}
@@ -0,0 +1,11 @@
---
- name: Update in-cluster configuration
shell: |
kubeadm init phase upload-config kubeadm \
--config /etc/kubeadm/kubeadm-config.yml
args:
executable: /bin/bash
register: upload_config
until: upload_config is succeeded
retries: 30
delay: 10
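The `until`/`retries`/`delay` combination in the task above re-runs the upload until it succeeds, up to 30 attempts spaced 10 seconds apart. In plain shell the same behaviour looks roughly like this sketch, where `upload_config` is a hypothetical stand-in for the `kubeadm init phase upload-config` call (here it fails twice, then succeeds):

```shell
#!/usr/bin/env bash
# Sketch of Ansible's until/retries/delay loop: retry a flaky command
# until it succeeds or the retry budget is exhausted.
tries=0
upload_config() {            # hypothetical stand-in for the kubeadm call
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]         # fail on attempts 1 and 2, succeed on 3
}

attempt=0
max_retries=30
delay=0                      # the real task sleeps 10s between attempts
until upload_config; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge "$max_retries" ]; then
    echo "upload-config failed after ${max_retries} retries" >&2
    exit 1
  fi
  sleep "$delay"
done
echo "upload-config succeeded on attempt $((attempt + 1))"
```

Retrying here matters because the kube-apiserver was just restarted by the certificate tasks and may briefly refuse connections.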