filebeat error and excessive memory usage with autodiscover #9078
Thanks for opening this @bwunderlich824, could you share a broader log? I'm particularly interested in other messages, errors and timings. Best regards
Not a lot else in there besides what was posted above.
Same issue running filebeat v6.5.1. It was OK when the pods first restarted, but after a while I'm seeing the same issue on every filebeat pod in the daemonset. The k8s version is 1.8.10. I'll enable debug and post when it happens again.
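For anyone else trying to reproduce this, debug output can be turned on in filebeat.yml; a minimal sketch (the selector names here are an assumption and can be adjusted or dropped to log everything):

```yaml
# filebeat.yml - minimal sketch for capturing debug logs while reproducing the leak
logging.level: debug
# assumed selectors to narrow debug output to the components involved in autodiscover
logging.selectors: ["kubernetes", "autodiscover"]
```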
This seems to be the sequence every time the Watching API error hits:
Issue still persists with filebeat 6.5.3
Issue still persists with filebeat 6.5.4
Hi, we did a major refactor for 6.6 (#8851) and I think it should help here. Any chance you can give the snapshot image a try? I pushed it to
I pushed that version out to our development environment and I've had the same issue: the kubernetes pod just sucks up more memory until it reboots. Same logs as before.
We are having this problem with filebeat 6.4.2:
2019-01-31T03:04:11.614Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error EOF
I see a similar issue on my kubernetes clusters: filebeat will continue to use memory until exhausted, logging the messages described by @gamer22026. It's a fairly linear leak; I don't see any huge steps in usage.
@cnelson Which version of filebeat are you using?
I just tried with the latest 6.6.0 and have the same issues. The pods are each allowed up to 768MB of memory, an enormous amount, and they still run out. If it helps, I'm a paying Elasticsearch hosted customer and this issue has been going on for months. Is there anything else I can do to get you guys to look more into this? It's getting really old.
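For reference, the kind of cap being described sits in the DaemonSet's container spec; a minimal sketch with illustrative values (only the 768Mi limit comes from the comment above, the rest are assumptions):

```yaml
# excerpt from a filebeat DaemonSet container spec - illustrative values only
containers:
  - name: filebeat
    image: docker.elastic.co/beats/filebeat:6.6.0
    resources:
      limits:
        memory: 768Mi      # the cap mentioned above; the leak eventually exceeds it
      requests:
        cpu: 100m          # assumed request values, not from the reporter's manifest
        memory: 256Mi
```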
The issue persists with
I'm having the same issue, with the same messages being logged by filebeat, plus some other ones. I've posted more details on this other issue: #9302 (comment)
Upgraded to v7.0.1 and still having the same issues.
This should now be fixed; more details can be found here: #9302
This is a new ticket based on the closed but ongoing issues noted here (#6503). Filebeat logs show several errors and memory usage grows until the pod is shut down by Kubernetes. I am using filebeat v6.4.3.
filebeat logs show the following errors:
2018-11-14T16:50:43.002Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error EOF
2018-11-14T16:50:43.004Z ERROR kubernetes/watcher.go:254 kubernetes: Watching API error proto: wrong wireType = 6 for field ServiceAccountName
My configuration:
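For context, a typical 6.x autodiscover setup that exercises the kubernetes watcher looks something like the sketch below; it is an illustrative example only, not the reporter's actual configuration (the namespace condition and output host are placeholders):

```yaml
# filebeat.yml - illustrative 6.x kubernetes autodiscover example, not the original config
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.namespace: default   # placeholder condition
          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]   # placeholder host
```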