
Newest version of the container does not start up due to config errors #541

Closed
WolfgangMehner opened this issue Mar 5, 2021 · 1 comment · Fixed by #542

Comments

@WolfgangMehner

We are running a fluentd DaemonSet on our Kubernetes cluster (v1.19.6 on EKS).

We started up the most recent version of the image yesterday, and everything worked fine. The tag resolved to the following digest:
v1-debian-elasticsearch7 -> 18d5a7158b5b
Today we started up containers with the same config and tag, but got a different image that no longer started up:
v1-debian-elasticsearch7 -> 2a7e2d1e42f3

When the container started up, we got an error along the lines of:

ConfigError error="<parse> section is required."
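That error means a `<source>` in the loaded fluentd configuration is missing its `<parse>` block. For illustration only (a sketch of a typical tail source, not the image's actual shipped config), a conforming source looks like:

```
<source>
  @type tail
  path /var/log/containers/*.log
  tag kubernetes.*
  # Without this block, fluentd aborts at startup with
  # ConfigError: "<parse> section is required."
  <parse>
    @type json
  </parse>
</source>
```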

Unfortunately, we cannot provide the precise error message anymore; since the fluentd Pods never started up, nothing was logged.

@moorepatrick
Contributor

moorepatrick commented Mar 5, 2021

I'm seeing the same thing. A new minor version was pushed last night, but you were probably pinned to the major version tag.
v1-debian-elasticsearch7 is aliasing v1.12.0-debian-elasticsearch7-1.1 today, so try v1.12.0-debian-elasticsearch7-1.0 instead.
(Note that you need the full v1.12.0-debian prefix.)
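To stop the floating alias from pulling a new image underneath you, the pin can go directly into the DaemonSet's Pod spec. A minimal sketch, assuming a container named fluentd and the fluent/fluentd-kubernetes-daemonset repository (both names illustrative):

```yaml
# Fragment of the DaemonSet's Pod template: pin to the exact working
# release tag instead of the floating v1-debian-elasticsearch7 alias.
containers:
  - name: fluentd
    image: fluent/fluentd-kubernetes-daemonset:v1.12.0-debian-elasticsearch7-1.0
```

Pinning to a full release tag (or an image digest) trades automatic updates for reproducible rollouts, which is usually what you want for a cluster-wide logging agent.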

In my case, I'm seeing the addition of gem 'fluent-plugin-parser-cri' version '0.1.0' at startup, which looks like the likely culprit. Still digging into a real solution.
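For context, fluent-plugin-parser-cri adds a `cri` parser type for containerd/CRI-O log lines. A source using it would look roughly like this (a sketch under that assumption, not necessarily what the image now ships):

```
<source>
  @type tail
  path /var/log/containers/*.log
  tag kubernetes.*
  read_from_head true
  <parse>
    # Parses CRI log format: "<timestamp> <stream> <flag> <message>"
    @type cri
  </parse>
</source>
```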

Related: #521
